crowetic (OP)
Legendary
Offline
Activity: 2282
Merit: 1072
https://crowetic.com | https://qortal.org
|
|
May 07, 2016, 10:04:07 PM |
|
I just wanted to pop in and say, I'm very excited for the releases we're planning within the next 2 months. Everyone will be VERY happy to be part of BURST. Thank you for your support, BURST community, we're about to make you VERY happy.
|
| QORTAL
| .⊙.Web and Application hosting. ⊙ decentralized infrastructure .⊙.leveling and voting.
| Founder/current dev group facilitator |
|
|
|
yellowduck2
|
|
May 08, 2016, 01:25:49 AM |
|
Quote from: crowetic
I just wanted to pop in and say, I'm very excited for the releases we're planning within the next 2 months. Everyone will be VERY happy to be part of BURST. Thank you for your support, BURST community, we're about to make you VERY happy.

Looking forward to burst 2.0
|
|
|
|
Nott
Newbie
Offline
Activity: 33
Merit: 0
|
|
May 08, 2016, 01:28:58 AM |
|
Very nice 2.0 coming up!
|
|
|
|
Turn0ff
|
|
May 08, 2016, 07:06:58 AM |
|
Quote from: crowetic
I just wanted to pop in and say, I'm very excited for the releases we're planning within the next 2 months. [...]

If the coming releases are as massive as indicated (although I'm a bit puzzled by the reason to point this out), BURST 2.0 might be very useful in promotional work. Talking of PR, which CF is the one to support...? It'd be a disaster if it becomes an ACCT failure... Also, PR is expensive; the community must take responsibility to get proper means for the Team.
|
|
|
|
crowetic (OP)
Legendary
Offline
Activity: 2282
Merit: 1072
https://crowetic.com | https://qortal.org
|
|
May 08, 2016, 04:23:27 PM |
|
Quote from: Turn0ff
If the coming releases are as massive as indicated (although I'm a bit puzzled by the reason to point this out), BURST 2.0 might be very useful in promotional work. Talking of PR, which CF is the one to support...? It'd be a disaster if it becomes an ACCT failure... Also, PR is expensive; the community must take responsibility to get proper means for the Team.

I've already paid the down payment for a company to start our first real PR outreach on YouTube; they're laying the foundations of it now. As I said, the only reason I mentioned it is that I'm seeing all the development on it and I'm excited about it. That's it. Also, if you'd like to continue doing your Twitter campaigns, I will provide you with whatever you require to do your work better. The release of the phone mining will be big as well, and we're planning the PR for that too.

The BURST development AT on the CF page with more funding is the 'right' one. I accidentally started two of them because my computer had issues and I didn't know that the first one actually took. So they're both 'real', but the one with more funding in it is the one we should all shoot for, as it has a better likelihood of success... Though if both of them got fulfilled, that would be nice.

I am also starting to reach out to bigger investors in order to get things rolling faster. Though I do like the fact that our own CF platform is sustaining things as well; that's a really great selling point. Don't take my being excited as an attempt to do anything that I wouldn't do. Just know that I'm excited seeing everything that is going on, and I'm happy that I get to be the one to soon share it with you all.
|
|
|
|
Turn0ff
|
|
May 08, 2016, 09:19:16 PM |
|
I know that your intentions are the very best; I'd never question that! I just took a couple of steps back and asked myself what a newbie to BURST might think. I think they can react in at least two ways, where one is not so good (P&D). Others will be excited and join us. Sometimes one needs to take a few steps back; perhaps things are fine, perhaps they need a little adjustment. I believe in heads-ups! Or I'm just irritated that I had to sell some, and want those babies back badly before things are rolled out.

The Twitter campaign will continue; I just had a lot going on IRL. daWallet and KarlPerkins have already helped me out a great deal. Should I run out of ideas, perhaps we can do a little brainstorm. So far, 600 followers with a clear interest in BURST; that's pretty nice for a little over a month! What would be helpful is a heads-up when news is approaching, to prepare the spreading of the news.
|
|
|
|
Cobra98
Member
Offline
Activity: 70
Merit: 10
|
|
May 08, 2016, 10:21:12 PM |
|
Got another mining question for the experts!
Is there an advantage or disadvantage to plotting small chunks on larger drives versus one large plot taking up the whole drive?
I have a couple of 2TB drives, but each has only 1 large plot taking up the whole drive, so I'm unable to use the plot optimizer due to lack of space, now that I've found it increases speed by 15-20 MB/s. Is there any speed or deadline difference if you have one 2TB plot versus two 1TB's, or even four 500GB's, on a single drive? Ideally I'd like to have 1 drive just for plotting, say, 500GB plots, then optimize them and move them to larger drives, filling those up with the smaller chunks of plots.
Hope my question makes sense and someone knows the answer and can help me squeeze the most out of my small 8tb setup, thanks!
|
|
|
|
pinballdude
|
|
May 08, 2016, 11:15:02 PM |
|
Quote from: Cobra98
Got another mining question for the experts! Is there an advantage or disadvantage to plotting small chunks on larger drives versus one large plot taking up the whole drive? [...]

The advantage of one large file is that it is probably the fastest option. More files will be a tiny bit slower, but as long as it's only 30 files or so, the difference should barely be measurable; the overhead is just the opening and closing of more files in the operating system.

The advantages of smaller files are several. With one huge file, a disk error that makes a bit unreadable can potentially make the entire file bad for reading. With smaller files, you can rename the file with the "hole" in it and keep mining the rest until the disk finally breaks down for good.

Also, with smaller files it is easier to rearrange things. I have some drives that hold both burst stuff and other stuff, and when the other stuff needs more space, I just move a 250GB burst file to some other drive with more room. If I had made files in terabyte sizes, it would be harder to adjust to other needs. Now I can increase or decrease the burst usage of any drive in 250GB steps (as most of my files are that size).

I create batch files to generate my plot files, and keep a spreadsheet to track what is where. With more files you will have more administration, keeping track of which nonces you have plotted and where they are. (I don't remember how many drives I have with burst plots, but it is probably 20 to 30 drives across 3 PCs; the spreadsheet remembers what and where for me.)
|
|
|
|
Cobra98
Member
Offline
Activity: 70
Merit: 10
|
|
May 09, 2016, 01:13:41 AM |
|
Thanks pinballdude! I'm thinking of going back through my drives and plotting 250 or 500 gig plots to make things more manageable and to be able to use the plot optimizer. Maybe 250's, since I have smaller drives, idk. Do you start from 0 when plotting your first nonce, with each plot after that starting on the next number after the end nonce, or are your plot ranges all over the place as long as they don't overlap?
I guess I'm OCD in that my first plot is 0 through A, the next is A+1 through B, the next is B+1 through C, and so on, so that no nonce gets missed. Not sure if that really matters or makes a difference in speed or deadline finding, but it's just the way I've done them since day 1 of my BURST mining.
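The sequential scheme described above (each plot's start nonce immediately following the previous plot's end) can be sketched in a few lines. This is an illustrative helper, not an official tool; `plot_ranges` and its plot sizes are hypothetical names chosen here:

```python
def plot_ranges(plot_nonces, start=0):
    """Assign back-to-back start nonces to a list of plot sizes,
    so no nonce is skipped or duplicated and ranges never overlap."""
    ranges = []
    for count in plot_nonces:
        ranges.append((start, count))  # (start nonce, nonce count)
        start += count                 # next plot begins right after
    return ranges

# Three equally sized plots of 1,000,000 nonces each:
print(plot_ranges([1_000_000, 1_000_000, 1_000_000]))
# [(0, 1000000), (1000000, 1000000), (2000000, 1000000)]
```

As the question notes, contiguity is optional; only non-overlap matters for mining, so gaps between ranges are harmless.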
|
|
|
|
Turn0ff
|
|
May 09, 2016, 02:55:42 AM Last edit: May 09, 2016, 03:06:24 AM by Turn0ff |
|
Two articles of some relevance:

"Heat doesn't kill your hard drives. Humidity does" (May 5, 2016, by John Harris)
Full text: http://www.remosoftware.com/info/heat-doesnt-kill-hard-drives-humidity

Does heat kill your hard drive? Most of us think temperature impacts the life span of hard drives more than anything.
This has been a persistent myth for years within the storage industry, until a team of researchers from Rutgers University, GoDaddy and Microsoft shattered it.
The research team, led by Ioannis Manousakis and Thu D. Nguyen of Rutgers, Sriram Sankar of GoDaddy, and Gregg McKnight and Ricardo Bianchini of Microsoft, studied the impact of temperature and humidity variations on hardware reliability in datacenters.
I will quickly summarize the overview of the study:
A recent study estimates that datacenters consume roughly 2% of the electricity in the U.S. and 1.5% worldwide; a single hyperscale datacenter typically accounts for more than 30 MW.
To reduce datacenter energy consumption, the techniques involve increasing the hardware operating temperature and reducing the need for cooling air inside the datacenter.
Although lowered cooling cost seems a clear win, it may have severe consequences, such as decreased hardware reliability.
None of the prior research has addressed the tradeoffs between cooling energy, datacenter environmental conditions, hardware component reliability, and overall costs in modern free-cooled datacenters.
"Environmental Conditions and Disk Reliability in Free-cooled Datacenters" clarifies the impact of environmental conditions on hardware reliability.
The study was carried out at 9 Microsoft datacenters around the world, with data collected over periods of 1.5 to 4 years.
Based on the data, they derive a new model of disk lifetime as a function of both temperature and relative humidity.
The researchers then quantify the tradeoffs between energy consumption, environmental conditions, component reliability, and costs.
Their key findings include:
1. On average, disks account for 89% of component failures, regardless of the environmental conditions.
2. Relative humidity has a much stronger impact on disk failures than absolute temperature under current datacenter operating conditions.
3. Temperature variations and relative humidity variations are negatively correlated with disk failures.
4. Disk failure rates increase significantly during periods of high relative humidity.
5. Disk controller/connectivity failures increase significantly when operating at high relative humidity.
6. Server designs that place disks in the back of enclosures can significantly reduce the disk failure rate (in high-relative-humidity datacenters).
7. Employing software techniques to mask disk failures significantly reduces infrastructure and energy costs.
|
|
|
|
GenPop
Newbie
Offline
Activity: 9
Merit: 0
|
|
May 09, 2016, 02:57:11 AM |
|
Very nice 2.0 coming up!
I'd have to agree, coolest idea since.
|
|
|
|
Turn0ff
|
|
May 09, 2016, 03:05:12 AM |
|
Second article on the same topic:

"Humidity rather than heat is the number one enemy of the hard disk" (by Darren Allan, March 31, 2016)
Full text: http://www.techradar.com/news/computing-components/storage/humidity-rather-than-heat-is-the-number-one-enemy-of-the-hard-disk-1318085

Heat isn't the biggest enemy of the humble hard disk; rather, humidity is what causes the most failures, a new piece of research has observed.
The study, carried out by Rutgers University and entitled "Environmental Conditions and Disk Reliability in Free-cooled Datacenters", found that the most negative effects on drive controllers and adapters were felt when humidity levels increased.
As Network World reports, the testing took place in Microsoft data centres and encompassed over a million hard drives over a period of several years, and unsurprisingly found that the vast majority of hardware failures in the data centres (89% of them) were disk failures.

Clear difference
As the humidity level rises, failures increase to such an extent that the study authors noted you could easily tell which data centres had humidity controls, as those which didn't showed up clearly when they looked at the annualised failure rate of controllers.
Humidity is such a danger that the researchers found that positioning drives in the "hot region at the back of the server" actually improved the reliability of the drives, because the heat kept humidity at bay; heat is clearly the lesser of two evils.
Whether the cost of advanced humidity controls for a data centre is worth it compared to what you'd fork out replacing the extra failed disks is another matter, although that also depends on how long-term you're looking.
Last month, we saw some research from Google on the reliability of the hard disk's big rival, the SSD. That also turned up an interesting finding, namely that it wasn't the amount of usage the SSD had seen which correlated with high error rates, but simply the age of the drive. In other words, heavy usage isn't the big demon it used to be, and today's SSDs cope far better with heftier workloads.

BURST handles summertime fine, while scrypt coins will have their regular problem.

The two articles are based on an academic paper, "Environmental Conditions and Disk Reliability in Free-cooled Datacenters", presented earlier this year.
Full text: http://0b4af6cdc2f0c5998459-c0245c5c937c5dedcca3f1764ecc9b2f.r43.cf2.rackcdn.com/23104-fast16-papers-manousakis.pdf

Just a comment: I see some PR opportunities here.
|
|
|
|
riskyfire
|
|
May 10, 2016, 02:05:52 PM |
|
ByteEMini Dividends Paid Out

I'm pleased to report our monthly dividend has been paid as normal, making this our 12th regular payout. Total dividends for the year: 0.0750687 per share, a return of 18.7% on the initial offer price. Thanks for your support, and also to Crowetic for the ByteEnt asset on which this asset is based.

Name: ByteEMini
Asset ID: 639537212154320
Offer Price: 0.4
Total Shares: 100,000
Dividends: Monthly

10/05/16 - Dividend paid: 0.00756 per share

Amount paid, Account, TX
245.31444, BURST-PMVC-387U-LG6C-FUMVY, 7160713852089312488
218.484, BURST-XC7A-FM25-3TK9-CNPGC, 15655097424831016152
151.2, BURST-KTTD-6GE6-6BK9-B8ABB, 404704383994532551
66.13488, BURST-54J2-DUDF-EUQU-4ZZRQ, 2376322915910760786
34.02, BURST-53DP-A7CT-YTM9-FZCLM, 14096384036515336296
23.436, BURST-7G43-CTSC-QLTM-GASD4, 9316693045737017709
13.23, BURST-B2R8-TZKJ-T4NG-4AJBW, 3832531083452965023
3.78, BURST-78S3-3AXW-SDDG-37Y72, 9470940232602752490
0.39312, BURST-YCZP-KEA3-D2WP-CYDSP, 4670589994539798161
0.00756, BURST-GGPX-PP5S-LAGP-CBGCU, 17043839847802145202
------------ All transactions processed ------------
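The quoted yearly return follows directly from the figures in the announcement; a quick check (the post's 18.7% truncates rather than rounds):

```python
total_dividends = 0.0750687   # per share, for the year (from the announcement)
offer_price = 0.4             # initial offer price per share

yearly_return = total_dividends / offer_price
print(f"{yearly_return:.1%}")  # 18.8% when rounded to one decimal place
```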
|
|
|
|
Turn0ff
|
|
May 11, 2016, 12:40:23 AM |
|
Plotting again...
|
|
|
|
notabeliever
|
|
May 11, 2016, 01:37:04 AM Last edit: May 11, 2016, 01:55:05 AM by notabeliever |
|
Quote from: Turn0ff
Plotting again...

Yeah, same here. Any suggestions for streamlining the process when you have a name/size mismatch, then you fix it and end up with the wrong stagger? I'm looking for a formula for Excel/LibreOffice that would just take our input of HD bytes divided by 262,144 and give the working stagger size.

689 GB is 740,397,670,400 bytes / 262,144 = 2,824,392.96875; change that to 2,824,392 nonces, and now the stagger size is wrong? The current stagger size 10240 fails; the overlap tool says "2824392_10240: Number of nonces (2824392) is not a multiple of stagger size (10240)". What's the best formula to use?

Also, 685 GB is 735,513,149,440 bytes / 262,144 = 2,805,760 nonces; how do I find the matching stagger size?

And 399 GB is 428,681,199,616 bytes / 262,144 = 1,635,289 nonces; trying to find a working stagger.
|
|
|
|
haitch
|
|
May 11, 2016, 02:14:09 AM |
|
Quote from: notabeliever
[stagger-size question above]

If you can, use the GPU plotter in direct mode; it'll create the perfect stagger: fully optimized plots.

Otherwise:

plotsize = (initial-plotsize div desired-stagger) * desired-stagger

So, in your example: 2,824,392 div 10,240 = 275, and 275 * 10,240 = 2,816,000 nonces.

Going from number of bytes:

num-nonces = ((num-bytes div 262144) div desired-stagger) * desired-stagger
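Haitch's formula is just integer division twice over. A minimal sketch (the function name is ours; 262,144 bytes per nonce is taken from the thread):

```python
NONCE_BYTES = 262144  # each plotted nonce occupies 256 KiB on disk

def usable_nonces(num_bytes: int, stagger: int) -> int:
    """Largest nonce count that fits in num_bytes and is an
    exact multiple of the stagger size."""
    return ((num_bytes // NONCE_BYTES) // stagger) * stagger

# The three drives from the question above, with stagger 10240:
print(usable_nonces(740_397_670_400, 10240))  # 2816000
print(usable_nonces(735_513_149_440, 10240))  # 2805760 (already a multiple)
print(usable_nonces(428_681_199_616, 10240))  # 1628160
```

These match the corrected nonce counts given in the replies: 2,805,760 is already a multiple of 10,240 and stays unchanged, while the other two round down.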
|
|
|
|
Blago
|
|
May 11, 2016, 02:49:11 AM Last edit: May 11, 2016, 03:13:58 AM by Blago |
|
Quote from: notabeliever
[stagger-size question above]

If stagger = 10240:

Formula (LibreOffice/OpenOffice): =ROUNDDOWN(X/stagger)*stagger
Formula (Excel): =ROUNDDOWN(X/stagger, 0)*stagger

So, if:
X = 2824392 => nonces = 2816000
X = 2805760 => nonces = 2805760
X = 1635289 => nonces = 1628160
|
Relax, I’m russian!... BURST-B2LU-SGCZ-NYVS-HZEPK
|
|
|
|
notabeliever
|
|
May 11, 2016, 04:24:04 AM |
|
Haitch and Blago, thank you; that helps us.
|
|
|
|
vaxman
Member
Offline
Activity: 99
Merit: 10
|
|
May 11, 2016, 07:56:17 AM |
|
yes, for more than a week. The owner/donation account is BURST-ZKU3-PKQ3-YHBS-FBDD9
|
|
|
|
|