# Groups and Pools
## Understanding the needs
When dealing with stratum connections, luxminer has two main constructs: groups and pools. To make it easier to understand the reason behind their existence, it is helpful to think about the two common mining scenarios:
- The "failover" scenario: It is common for individual miners to "always" mine in a single, preferred pool. Many, however, also have a secondary (and maybe a tertiary) "backup" pool to use when the main one is in maintenance, blocked, or otherwise inaccessible. In this case, when the primary pool is unavailable, we want to go to the secondary, and so on, until a connection succeeds. We call this failover.
- The "split" scenario: In this scenario, the miner also has multiple pools, but wants to work in more than one pool at the same time. This means multiple connections to different servers, possibly with a specific ratio (say, 1:1, 1:2, etc.). This is called hashrate splitting, because no single pool receives the "full" hashrate; it is divided (again, according to a specific ratio) between multiple connections at the same time.
Note that those scenarios are not exclusive; a very common need is to have both hashrate splitting and failover pools at the same time. The question now is: how do we deal with this complexity?
## The solution
The solution for handling both scenarios is to divide pools into groups. But first, what is a group? For luxminer, a group is a named list of pools with a specific quota value. Each group has one or more pools, which are tried from top to bottom when establishing a stratum connection. Once one of them connects successfully, it is considered active, and its jobs will be processed normally. Only one pool in a group may be active at a given time.
For the simple failover scenario, a single group is enough, since it is exactly what we need: a list of pools with a given priority (top to bottom), where, once a connection fails too many times, the next one is tried.
Hashrate splitting is solved by defining more than one group. Since we will have one connection per group, the hashrate will be "split" between the groups based on the defined group quotas. Of course, you can define additional groups if you want more splits. Having failover pools for each split is just a matter of putting more than one pool in each group.
## Configuration
Now that we know the "why" of groups and pools, let's see the "how" of manually configuring them in your `luxminer.toml`. The first thing to do is to define a group:
```toml
[[group]]
name = "My Group"
quota = 1.0
```
This creates the `My Group` group, with a quota of `1.0` (this is the value used if you don't explicitly define a quota). But a group is useless unless we define some pools in it:
```toml
[[group.pool]]
url = "stratum+tcp://btc.global.luxor.tech:700"
user = "account.worker"

[[group.pool]]
url = "stratum+tcp://my.secondary.pool:700"
user = "account.worker"
```
This creates two pools, in order, for the previously defined group. In the example above, we will connect to `btc.global.luxor.tech` and, if it fails too many times, we will move to `my.secondary.pool` (the failover pool). How about defining a hashrate split? Just add a new group with at least one pool:
```toml
[[group]]
name = "My other group"
quota = 1.0

[[group.pool]]
url = "stratum+tcp://my.other.pool:700"
user = "account.worker"
```
This creates another group, this time with just one pool (we could add more to create a failover scenario for this group, too). Since we now have two groups with a quota of `1.0` each, the hashrate will be split evenly between them (see How quotas work for details).
Besides what we've seen so far, groups and pools also have extra keys that you might find handy:
- If your pool needs it, you can specify the `password` key for your pool.
- You can disable a pool by setting the `enable` key to `false`.
- You can specify `user` and `password` at the group level. If you do so, the values will be automatically replicated (unless overridden, of course) to the pools of that group. This might be convenient if you have a long list of pools with the same username.
Last, but not least, you can change group and pool configuration programmatically, via the API. If you want to go that route, look at the `addpool`, `removepool`, `enablepool`/`disablepool`, `addgroup`, `removegroup` and `groupquota` commands for further information.
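As a rough sketch of what driving those commands could look like, the snippet below builds JSON payloads in the CGMiner-style convention and sends them over TCP. The port number (4028), the payload shape, and the parameter formats are assumptions on my part, not confirmed luxminer behavior; check the luxminer API documentation for the actual formats:

```python
import json
import socket


def build_command(command, parameter=None):
    """Build a CGMiner-style JSON API payload (shape assumed, not confirmed)."""
    payload = {"command": command}
    if parameter is not None:
        payload["parameter"] = parameter
    return json.dumps(payload)


def send_command(host, payload, port=4028, timeout=5.0):
    """Send a payload to the miner API and return the raw response.

    Port 4028 is the conventional CGMiner API port; luxminer may differ.
    """
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(payload.encode())
        return sock.recv(65536).decode()


# Hypothetical parameter format: group name plus the new quota value.
payload = build_command("groupquota", "My Group,2.0")
# send_command("192.168.1.50", payload)  # uncomment with a real miner address
```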
## How quotas work
In the TOML file, the quota can be defined as any floating-point number, which is then translated, at runtime, into the correct ratio between the groups. But how does this conversion work? The "real" quota of a given group can be defined as:

real_quota(G) = quota(G) / (quota(G1) + quota(G2) + ... + quota(Gn))

That is, the "real" quota is the quota defined in the TOML file, divided by the sum of all quotas defined in the TOML file. An example will make this clearer: suppose we have three groups, `G1`, `G2` and `G3`, with quotas of `1.0`, `3.0` and `1.0`, respectively. If we apply the previous formula, we get this result:

real_quota(G1) = 1.0 / (1.0 + 3.0 + 1.0) = 0.2
real_quota(G2) = 3.0 / (1.0 + 3.0 + 1.0) = 0.6
real_quota(G3) = 1.0 / (1.0 + 3.0 + 1.0) = 0.2
You can think of those results as percentages, meaning that groups `G1` and `G3` will each get 20% of the hashrate, while the remaining 60% goes to `G2`. If you prefer to use straight percentage values as quotas, go for it. The math is the same.
If, for some reason, one of the groups goes "dark" (for example, if all of its pools are disabled), its share of the hashrate is redistributed to the remaining groups. If and when it comes back online, luxminer will rebalance the work between the groups until the desired ratio is matched as closely as possible.
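The normalization formula and the dark-group redistribution described above can be sketched in Python (the function name and data layout are mine for illustration, not part of luxminer):

```python
def real_quotas(quotas):
    """Normalize configured quotas into hashrate ratios that sum to 1.0.

    `quotas` maps group names to the quota values from the TOML file.
    """
    total = sum(quotas.values())
    return {name: q / total for name, q in quotas.items()}


groups = {"G1": 1.0, "G2": 3.0, "G3": 1.0}
print(real_quotas(groups))  # → {'G1': 0.2, 'G2': 0.6, 'G3': 0.2}

# If G2 goes "dark", its share is redistributed to the remaining groups:
alive = {name: q for name, q in groups.items() if name != "G2"}
print(real_quotas(alive))   # → {'G1': 0.5, 'G3': 0.5}
```

Note that the redistribution falls out of the same formula: renormalizing over the remaining groups keeps their ratios relative to each other unchanged.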