Running on mainnet
Configuration file
The standard mainnet configuration file (`node-default.properties`), without any adjustments, using the SQLite database and ready to go, looks like this:

```properties
#######################################################################
# Either edit this file to suit your needs or make a copy and name it #
# 'node.properties'.                                                  #
#                                                                     #
# Lines starting with '#' are ignored, remove this character to       #
# activate the command.                                               #
#                                                                     #
# Settings in a 'node.properties' file overwrite the values in the    #
# 'node-default.properties'.                                          #
#######################################################################
#
# Integer parameters can be
#   decimal      123
#   binary       0b100101
#   hexadecimal  0xaf1d
#
# Boolean parameters can be
#   1, true, yes, on
#   0, false, no, off
#   (case insensitive)
#

#### Cashback for transaction fees ####

## Set an account ID and get a 25% cashback on every transaction fee that is
## created by this node.
# node.cashBackId = 8952122635653861124

#### DATABASE section ####

## Maximum allowed connections by the database connection pool.
# DB.Connections = 30

## If a database, i.e. SQLite, supports optimization features like shutdown
## defrag, vacuum etc., this might require some additional shutdown or startup
## time (depending on the database). This can help improve performance and
## reduce the size of the database file on disk.
## This is "on" by default, but can cost some time while starting/shutting down.
# DB.Optimize = off

## If you want to use SQLite (recommended for local/non-public nodes):
# DB.Url=jdbc:sqlite:file:./db/signum.sqlite.db

## SQLite journaling mode
## https://www.sqlite.org/pragma.html#pragma_journal_mode
## Possible values are delete, truncate, persist, wal (default, recommended).
## wal may occupy up to twice more disk space than the others while running
## the node, but allows read concurrency and usually better performance
## (see more here: https://www.sqlite.org/wal.html).
## It is highly recommended to use wal mode during syncing, to dramatically
## reduce I/O operations and thus achieve faster sync times.
## Info: the memory journal mode is not supported.
# DB.SqliteJournalMode = wal

## If you want to use MariaDB (recommended for public nodes):
# DB.Url=jdbc:mariadb://localhost:3306/signum
# DB.Username=signumnode
# DB.Password=s1gn00m_n0d3

## If you want to use Postgres (experimental, considered an alternative to MariaDB):
# DB.Url=jdbc:postgresql://localhost:5432/signum?sslmode=disabled
# DB.Username=signumnode
# DB.Password=s1gn00m_n0d3

#### PEER-2-PEER Networking ####

## Announce my IP address/hostname to peers and allow them to share it with
## other peers. If disabled, the peer networking servlet will not be started
## at all.
# P2P.shareMyAddress = yes

## My externally visible IP address or host name, to be announced to peers.
## It can optionally include a port number, which will also be announced to
## peers, and may be different from P2P.Port (useful if you do port forwarding
## behind a router).
# P2P.myAddress =

## Host interface on which to listen for peer networking requests, default all.
## Use 0.0.0.0 to listen on all IPv4 and IPv6 interfaces.
# P2P.Listen = 0.0.0.0

## Port for incoming peer-to-peer networking requests, if enabled.
# P2P.Port = 8123

## Use UPnP port forwarding. Set to 'no' on a server setup.
# P2P.UPnP = yes

## My platform, to be announced to peers. Enter your Signum address here for
## SNR rewards, see https://wiki.signum.network/signum-snr-awards/
# P2P.myPlatform = PC

## A list of peer addresses / host names, separated by '; ', used for faster
## P2P networking bootstrap.
# P2P.BootstrapPeers = australia.signum.network:8123; brazil.signum.network:8123; canada.signum.network:8123; europe.signum.network:8123; europe1.signum.network:8123; europe2.signum.network:8123; europe3.signum.network:8123; latam.signum.network:8123; singapore.signum.network:8123; ru.signum.network:8123; us-central.signum.network:8123; us-east.signum.network:8123

## These peers will always be sent rebroadcast transactions. They are also
## automatically added to P2P.BootstrapPeers, so no need for duplicates.
# P2P.rebroadcastTo = 216.114.232.67:8123; 51.235.143.229:8123; signode.ddns.net:8123; 188.34.159.176:8123; signum.mega-bit.ru:8123; storjserver2.cryptomass.de:8123; 89.58.10.207:8123; 84.54.46.176:8123; signumwallet.ddns.net:8123; taylorforce.synology.me:8123; zwurg.feste-ip.net:51940; zmail.cloudns.ph:8123; wallet.signa-coin.eu:8123; wekuz-signa-node.duckdns.org:8123; austria-sn.albatros.cc:8123; signumwallet.lucentinian.com:8123; 85.238.97.205:8123; 124.246.79.194:8123

## Connect to this many bootstrap connection peers before using the peer
## database, to get connected faster. Please be aware that higher != better
## (3-5 are usually good values). Set to 0 to disable.
# P2P.NumBootstrapConnections = 3

## Known bad peers to be blacklisted.
# P2P.BlacklistedPeers =

## Maintain active connections with at least that many peers. Also here,
## more != better (you want good peers, not just many).
# P2P.MaxConnections = 20

## Maximum number of blocks sent to other peers in a single request.
# P2P.MaxBlocks = 720

## Use the peers database? (Only if not in offline mode.)
# P2P.usePeersDb = yes

## Save known peers in the peers database? (Only if P2P.usePeersDb is true.)
# P2P.savePeers = yes

## Set to false to disable getting more peers from the currently connected
## peers. Only useful when debugging and you want to limit the peers to those
## in the peers database or P2P.BootstrapPeers.
# P2P.getMorePeers = yes

## If the database of peers exceeds this value, more peers will not be
## downloaded. This value will never be below MaxConnections. A too high
## value will slow down connections.
# P2P.getMorePeersThreshold = 400

## Peer networking connect timeout for outgoing connections.
# P2P.TimeoutConnect_ms = 4000

## Peer networking read timeout for outgoing connections.
# P2P.TimeoutRead_ms = 8000

## Peer networking server idle timeout, milliseconds.
# P2P.TimeoutIdle_ms = 30000

## Blacklist peers for 600000 milliseconds (i.e. 10 minutes by default).
# P2P.blacklistingTime_ms = 600000

## Enable priority (re-)broadcasting of transactions. When enabled, incoming
## transactions will be priority-resent to the rebroadcast targets.
# P2P.enableTxRebroadcast = yes

## Amount of extra peers to send a transaction to after sending to all
## rebroadcast targets.
# P2P.sendToLimit = 10

## Max number of unconfirmed transactions that will be kept in cache.
# P2P.maxUnconfirmedTransactions = 8192

## Max percentage of unconfirmed transactions that have a full hash reference
## to another transaction kept in cache.
# P2P.maxUnconfirmedTransactionsFullHashReferencePercentage = 5

## Max amount of raw UT bytes we will send to someone, through both push and
## pull. Keep in mind that the resulting JSON size will always be bigger.
# P2P.maxUTRawSizeBytesToSend = 175000

## Jetty pass-through options, P2P section.
# JETTY.P2P.DoSFilter = on
# JETTY.P2P.DoSFilter.maxRequestsPerSec = 30
# JETTY.P2P.DoSFilter.delayMs = 500
# JETTY.P2P.DoSFilter.maxRequestMs = 300000
# JETTY.P2P.DoSFilter.throttleMs = 30000
# JETTY.P2P.DoSFilter.maxIdleTrackerMs = 30000
# JETTY.P2P.DoSFilter.maxWaitMs = 50
# JETTY.P2P.DoSFilter.throttledRequests = 5
# JETTY.P2P.DoSFilter.insertHeaders = true
# JETTY.P2P.DoSFilter.trackSessions = false
# JETTY.P2P.DoSFilter.remotePort = false
# JETTY.P2P.DoSFilter.ipWhitelist = 127.0.0.1,localhost
# JETTY.P2P.DoSFilter.managedAttr = true

## Jetty pass-through parameters for P2P responses gzip compression.
# JETTY.P2P.GZIPFilter = on
# JETTY.P2P.GZIPFilter.minGzipSize = 1024

## Size of the download cache for blocks.
# node.blockCacheMB = 40

## Add this to check the deadline of every block since genesis, otherwise
## only past the checkpoint.
# node.checkPointHeight = 1

## Number of past blocks for the AT processor to load into memory/cache.
## Put 1 if you want to disable the cache, which may slow down AT/smart
## contract processing significantly. Do not put too high values, as this may
## cause significant memory occupation and even a negative impact on
## processing times.
# node.ATProcessorCacheBlockCount = 1000

#### API server ####

## Accept http/json API requests.
# API.Server = on

## Jetty pass-through options, API section.
# JETTY.API.DoSFilter = on
# JETTY.API.DoSFilter.maxRequestsPerSec = 30
# JETTY.API.DoSFilter.delayMs = 500
# JETTY.API.DoSFilter.maxRequestMs = 30000
# JETTY.API.DoSFilter.throttleMs = 30000
# JETTY.API.DoSFilter.maxIdleTrackerMs = 30000
# JETTY.API.DoSFilter.maxWaitMs = 50
# JETTY.API.DoSFilter.throttledRequests = 5
# JETTY.API.DoSFilter.insertHeaders = true
# JETTY.API.DoSFilter.trackSessions = false
# JETTY.API.DoSFilter.remotePort = false
# JETTY.API.DoSFilter.ipWhitelist = 127.0.0.1,localhost
# JETTY.API.DoSFilter.managedAttr = true

## Jetty pass-through parameters for API responses gzip compression.
# JETTY.API.GZIPFilter = on
# JETTY.API.GZIPFilter.minGzipSize = 1024

## Hosts or subnets from which to allow http/json API requests, if enabled.
## List delimited by ';', IPv4/IPv6 possible, default localhost.
# API.allowed = 127.0.0.1; localhost; [0:0:0:0:0:0:0:1];

## Key list to access the admin API requests. Uncomment and replace with your
## own keys, delimited by ';' if more than one key should be available.
# API.adminKeyList = e673529588638d2129af1e0528a1642cf2e0c180

## Does the API accept additional/redundant parameters in an API call?
## Default is no (the wallet accepts only params specified for a given call).
## Enable this if you have a sloppy client interacting, but please be aware
## that this can be a security risk.
# API.acceptSurplusParams = no

## Host interface on which to listen for http/json API requests, default
## localhost only. Set to 0.0.0.0 to allow the API server to accept requests
## from all network interfaces.
# API.Listen = 127.0.0.1

## List of CORS allowed origins.
# API.allowedOrigins =

## Port for http/json API requests.
# API.Port = 8125

## Websocket JSON event emission, available under ws://localhost:8126/events
## (or the configured port).
# API.WebsocketEnable = true

## Port for websocket/json API events.
# API.WebsocketPort = 8126

## The heartbeat interval in seconds that indicates a working connection.
# API.WebsocketHeartbeatInterval = 30

## Idle timeout for http/json API request connections, milliseconds.
# API.ServerIdleTimeout = 60000

## Directory with HTML and JavaScript files for the new client UI and admin
## tools utilizing the http/json API.
# API.UI_Dir = html/ui

## Set the documentation mode. Use one of [ modern, legacy, off ] to enable
## or even disable the API doc UI.
# API.DocMode = modern

## Enable SSL for the API server (also needs API.SSL_keyStorePath and
## API.SSL_keyStorePassword to be set).
# API.SSL = off

## Enforce that requests which require POST are only accepted when submitted
## as POST.
# API.ServerEnforcePOST = yes

## Your keystore file and password, required if SSL is enabled.
# API.SSL_keyStorePath = keystore
# API.SSL_keyStorePassword = password

## If you use https://certbot.eff.org/ to issue your certificate, provide
## below the path for your keys. BRS will automatically create the keystore
## file using the password above and will reload it weekly.
## Make sure you configure certbot to renew your certificate automatically so
## you don't need to worry about it.
# API.SSL_letsencryptPath = /etc/letsencrypt/live/yourdomain.com

#### DATABASE ####

## Enable trimming of derived objects tables.
# DB.trimDerivedTables = on

## If trimming is enabled, maintain enough previous-height records to allow a
## rollback of at least that many blocks. Must be at least 1440 to allow
## normal fork resolution. After increasing this value, a full re-scan needs
## to be done in order for previously trimmed records to be re-created and
## preserved.
# DB.maxRollback = 1440

## Database default lock timeout in seconds.
# DB.LockTimeout = 60

## Maximum number of rows to be inserted in a single SQL statement.
## Defaults to 10000, which should be OK in most situations. May be fine-tuned
## according to your DBMS or to your machine's performance.
## Warning: a high value (> 15000 rows) is known to generate queries too big
## for an SQLite backend.
# DB.insertBatchMaxSize = 10000

### GPU acceleration ###

## Enable GPU acceleration.
# GPU.Acceleration = off
# GPU.AutoDetect = on

## If GPU auto-detection is off (GPU.AutoDetect = off), you must specify
## manually which one to use.
# GPU.PlatformIdx = 0
# GPU.DeviceIdx = 0

## GPU memory usage in percent and how many hashes to process in one batch.
# GPU.MemPercent = 50
# GPU.HashesPerBatch = 1000

## Number of unverified transactions in cache before GPU verification starts.
# GPU.UnverifiedQueue = 1000

## Uncomment this to limit the number of CPU cores the wallet sees. Default
## is half of the available cores.
# CPU.NumCores = 4

#### Mining ####

## List of semicolon-separated passphrases to use when solo mining. When
## mining solo, if you enter your passphrase here, you can set your miner to
## pool mining mode and avoid sending your passphrase over the wire
## constantly. Do not use this on public-facing nodes or nodes that are
## accessible (filesystem or API server) by others, as it could cause your
## passphrase to become compromised or allow others to mine on your behalf
## without your knowledge.
# SoloMiningPassphrases=passphrase1;passphrase2;passphrase3;

## List of semicolon-separated passphrases to use when solo mining but with a
## reward recipient set. Your miner account is the one for which you provide
## only the ID, while the account you set as your reward recipient is the one
## for which you provide the passphrase here.
# RewardRecipientPassphrases=id1:passphrase1;id2:passphrase2;id3:passphrase3;

## Allow anyone to use the "submitNonce" API call. This call can be abused to
## force your node to perform lots of work in order to effectively mine for
## others. Disabling this option will only allow accounts whose passphrases
## are in SoloMiningPassphrases to mine through this node.
# AllowOtherSoloMiners=false

#### DEVELOPMENT ####
## (Proceed with extreme caution beyond this point.)

## Run with a different network:
# Testnet network
# node.network = signum.net.TestnetNetwork
# Mock mining (offline and accepting any nonce as valid)
# node.network = signum.net.MockNetwork

## Enter a version; upon exit, print a list of peers having this version.
# DEV.dumpPeersVersion =

## Force a re-build of the derived objects tables at start.
# DEV.forceScan = off

### DEBUGGING (part of development, isn't it?) ###

## Used for debugging peer-to-peer communications.
# brs.communicationLoggingMask = 0

## Track balances of the following accounts and related events for debugging
## purposes.
# brs.debugTraceAccounts =

## File name for logging tracked account balances.
# brs.debugTraceLog = LOG_AccountBalances_trace.csv

## Separator character for the trace log (default '\t', tab).
# brs.debugTraceSeparator =

## Quote character for the trace log (default '"', double quote).
# brs.debugTraceQuote =

## Log changes to unconfirmed balances.
# brs.debugLogUnconfirmed = false

## Timeout in seconds to wait for a graceful shutdown.
# node.shutdownTimeout = 180

## Enable the indirect incoming tracker service. This allows you to see
## transactions where you are paid but are not the direct recipient, e.g.
## multi-outs.
# node.indirectIncomingService.enable = true

## Auto pop-off means that the node will, when failing to push a block
## received whilst syncing (from another peer), pop off n-1 blocks, where n
## is the number of failures to push a block at this height. This, combined
## with blacklisting, should significantly lower the chance of your node
## becoming stuck, whilst syncing or when operating normally.
# node.autoPopOff.enable = true
```
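Since every line in the default file ships commented out, the intended workflow from the header comment is to create a `node.properties` file next to it containing only the values you want to override. A minimal sketch, assuming the configuration files live in a `conf/` directory (adjust the path to your installation):

```shell
# Create a minimal node.properties override file. Only the settings listed
# here take effect; everything else keeps its node-default.properties value.
mkdir -p conf
cat > conf/node.properties <<'EOF'
# Use SQLite (the default backend) with the recommended wal journal mode
DB.Url=jdbc:sqlite:file:./db/signum.sqlite.db
DB.SqliteJournalMode = wal

# Announced platform string (put your Signum address here for SNR rewards)
P2P.myPlatform = PC

# Keep the API reachable from localhost only
API.Listen = 127.0.0.1
API.Port = 8125
EOF

# Quick sanity check: list the active (uncommented) settings
grep -E '^[^#]' conf/node.properties
```

Note that the keys are case sensitive as written in the default file, and a key left commented out (prefixed with `#`) in `node.properties` has no effect.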