I recently started mining ETH on my 5700 (BIOS flashed to XT), mining on Nanopool. I'm getting around 48 MH/s, but I saw a method from Red Panda Mining that can increase my hashrate by around 10 MH/s, giving me 60 MH/s, at the cost of a high ratio of incorrect shares, using a config edit in PhoenixMiner. I'm wondering whether the increase in hashrate is worth it if it still leads to incorrect shares. My incorrect share rate is 2:1 using this edit. Thoughts?
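Whether the edit pays off can be estimated with simple arithmetic. Here is a rough sketch, under the assumption (my reading, not stated in the post) that "2:1" means one incorrect share for every two valid ones, i.e. a third of shares are rejected:

```python
# Rough effective-hashrate estimate. Assumption (not from the post itself):
# a "2:1" ratio means 1 incorrect share per 2 valid ones, so 1/3 are rejected.
reported = 60.0          # MH/s with the config edit
bad_fraction = 1 / 3     # rejected shares earn nothing
effective = reported * (1 - bad_fraction)
print(round(effective, 1))  # compare against 48.0 MH/s at stock settings
```

Under that reading, the effective rate would be 40 MH/s, which is worse than the stock 48 MH/s; if "2:1" instead means 2 incorrect per 100 valid, the math changes entirely, so it is worth pinning down what the ratio actually measures.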
I am running a few old miners (T9+) to heat my home; they are too loud and running at a loss (which is OK, as I am interested in the heat). Following these suggestions (and sticking with the stock firmware), I already reduced the fan speed by adding the following entries to the config:
"bitmain-fan-ctrl": true, "bitmain-fan-pwm": "66",
This resulted in an increased chip temperature (still below 100 °C), up from below 80 °C.
Since I am still not happy with the noise level (only a reduction from 75 dB to 60 dB), I am considering underclocking the miners.
I believe that frequency and hashrate should follow a linear/proportional law, meaning that reducing the frequency by 20% from 550 MHz to 440 MHz should reduce the hash rate by 20%, from 10.5 to 8.4 TH/s. Can someone please confirm this?
I am actually more interested in the relationship to power consumption. I am currently mining at a loss, meaning I can't afford the same energy cost at a lower hash rate. I have read that overclocking is not economical because energy consumption does not grow linearly.
- Would that also hold true in the other direction?
- Could mining even break even (neglecting hardware cost) if I reduced the frequency by 50 percent or more?
- Where can I find a plot showing this effect, or how could I create one? (I am not an electrician.)
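For intuition on the questions above: the usual first-order model is that hashrate scales roughly linearly with clock frequency, while dynamic power scales roughly with frequency times voltage squared, so an underclock combined with an undervolt can cut power faster than hashrate. A hedged sketch with illustrative numbers (the baseline wattage and voltage values are assumptions for demonstration, not T9+ measurements):

```python
# Hedged first-order model: hashrate ~ frequency; dynamic power ~ f * V^2.
# Baseline numbers are illustrative assumptions, not T9+ measurements.
f0, h0, p0 = 550.0, 10.5, 1450.0  # MHz, TH/s, watts (assumed baseline)
v0 = 1.0                           # relative core voltage at baseline

def predict(freq, volt):
    hashrate = h0 * (freq / f0)                  # linear hashrate scaling
    power = p0 * (freq / f0) * (volt / v0) ** 2  # f * V^2 power scaling
    return hashrate, power

h, p = predict(440.0, 0.9)  # 20% underclock plus a 10% undervolt
print(round(h, 1), round(p))
```

In this model a 20% underclock alone cuts power only ~20%, but pairing it with an undervolt (if the firmware allows it) cuts power ~35% for the same 20% hashrate loss, which is why efficiency can improve when clocking down. Real chips also have static leakage power, so measurement at the wall is the only reliable check.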
I know that a block is validated if the hash of the block header is below the difficulty target.
But since I've never run a Bitcoin node (I plan to soon), I have some questions about the mining process.
1) When miners run the Bitcoin software, do they by default confirm the transactions in the mempool with the highest fees, or can they select which transactions to include?
2) If you have a higher hash rate, can you validate transactions faster?
3) If a block is found on average every 10 minutes, as specified by the Bitcoin protocol, do miners also attempt to validate transactions in the meantime?
4) How are transactions validated?
Hello, I was wondering if anyone has an idea what hash rates look like on recently released PCs. I'm looking for numbers for an average user mining on his own PC, and then a best-case scenario for someone with a good video card. Sources, information, or personal experiences would be appreciated!
What hashrate is currently required to mine a Bitcoin block?
How much hashing power do you need to solve a block in one day?
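For the one-block-per-day question, the expected number of hashes to find a block is difficulty × 2³², so the required hashrate is that divided by 86,400 seconds. A sketch with an assumed round difficulty value (not live network data):

```python
# Expected hashes to find one block ~ difficulty * 2**32. The difficulty
# below is an assumed round number, not the current network value.
difficulty = 25e12
expected_hashes = difficulty * 2**32        # average guesses per block
hashrate_needed = expected_hashes / 86400   # hashes/s for one block/day
print(f"{hashrate_needed / 1e18:.1f} EH/s")
```

At that assumed difficulty you would need on the order of an exahash per second, i.e. warehouse-scale ASIC hardware, to expect one solo block per day on average.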
Okay, I cloned the Dash source (to learn).
It compiles fine under Linux/Windows, I have peers, and on first start the blocks were good. I mined 200 blocks, all peers synchronized well, and transactions were sent back and forth. Everything worked perfectly.
I stopped mining because there was no need to continue (it was just to learn), but I've found that it has not synced since, constantly showing "catching up". What I've tried:
- Rescanning
- Trying another machine
- Letting it sit for hours
Nothing syncs and no blocks load after re-indexing (it stays at zero). I figured it had something to do with there being no hashrate, but I can't fix that because the wallet is out of sync (I assume). The errors I am getting are:
json_rpc_call failed with cpuminer
setgenerate only shows 0 hash
The number of hashes a miner can perform per second. What you seem to be referring to is the network hash rate: the combined computing power of all miners in the network. This is not assumed to be constant, but the block time should be nearly constant. That is exactly why the difficulty is adjusted, depending on how often blocks are found.
Estimating the network hash rate
How do you estimate the hash rate of the entire network without knowing all the miners, their hardware, and how long they run? You could observe the rate at which shares are submitted to pools (and their difficulty) if you had access to that data. More likely, you estimate the value from the current difficulty and the rate at which blocks are solved. This can be calculated from the probability that a random guess produces a hash meeting the current target. From that, you determine how many guesses per second (the hashrate) are required to solve a block every 10 minutes:
hashrate = (1 / P) / block_time, where
P is the probability that a single guess produces a hash meeting the current target
block_time is the block time in seconds (600 seconds for Bitcoin)
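Plugging in the common approximation P ≈ 1 / (difficulty × 2³²) gives a quick estimate. A sketch (the difficulty value is an assumed example, not live network data):

```python
# P ~ 1 / (difficulty * 2**32): chance that one hash meets the target.
# The difficulty here is an assumed example value, not live network data.
difficulty = 25e12
block_time = 600                   # seconds (Bitcoin)
p = 1 / (difficulty * 2**32)
hashrate = (1 / p) / block_time    # network hashes per second
print(f"{hashrate / 1e18:.0f} EH/s")
```

This is how the "network hashrate" figures on block explorers are produced; they are estimates from difficulty and observed block times, not direct measurements.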
Which distribution follows the Hashrate?
[Charts: network hashrate and difficulty over time]
You can see that the difficulty follows the network hash rate. As the hashrate grows over time with increasing adoption (more miners joining), the difficulty rises to maintain a block time of 10 minutes.
A short story
There is an Antminer server farm for which I provide support, over 100 servers, mainly S9 and T9 models. Three specific S9 miners, side by side on the rack, are consistently below their normal hash rate on Slushpool.
When I check these miners on Slushpool, all three often fluctuate between 11 and 13 TH/s, but in the web interface and on the command line the RT and AVG hash rates are always normal with no fluctuation. Every time I check Slushpool, the value is almost always lower than normal; I've checked almost a hundred times in one day just to make sure I was being thorough.
Is there anything these three have in common, or something someone else has run into, that could cause a miner's Slushpool hash rate to appear lower than on the local server?
I checked both the command line and the web interface on all of these miners and saw the same thing: normal real-time and normal average, always. Yet every time I look at Slushpool, it almost always shows a hash rate 1 or 2 TH/s lower. The other miners in this farm normally show a Slushpool hashrate much closer to their local value, so I am trying to understand whether there is something I am missing here that could explain this.
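One thing worth keeping in mind when comparing the two readings: the pool never measures your hashrate directly. It estimates it from the shares you submit, so share luck alone makes the pool-side number swing. A sketch of how such an estimate works (the share count, per-share difficulty, and averaging window are assumed example values, not Slushpool internals):

```python
# How a pool estimates worker hashrate: each accepted share at difficulty d
# represents ~ d * 2**32 hashes on average. All values below are assumed
# examples, not Slushpool internals.
shares = 30                # accepted shares in the averaging window
share_difficulty = 65536   # per-share difficulty
window = 600               # seconds
hashrate = shares * share_difficulty * 2**32 / window
print(f"{hashrate / 1e12:.1f} TH/s")
```

Because each share is a rare random event, a short averaging window can easily read a TH/s or two below (or above) the miner's true rate; stale shares and network latency then bias the pool-side number consistently downward, which may explain part of what you are seeing.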