Let's clear some things up.
Power was on at the facility from Hour #1. Comcast was not installed until 5 days later. We still had internet and were mining during that time, just not on Comcast.
What we didn't have until last week was the UPGRADED power, i.e. the additional pulls.
For the move itself we were down no more than 72 hours. Anything beyond that was equipment problems, as in most of the Jupiters and a few of the Ants did not like the move for whatever reason.
Not to mention that picture was taken before the move, before the co-op's equipment was packed up. You can't even see the 3/4 of the place that is finished.
I would LOVE for some things to be cleared up. How about full disclosure and honest transparency as a start?
(link to picture removed to save space)
Bout 2 weeks ago, very much under construction.
And yes, that is a quakecon sign
January 15th less 2 weeks = January 1st, 2 days after the move.
When you were asked what was in the back corner, you yourself commented "KnC, Redhash, and some Ants"
So if 3/4 of the facility is done and complete at the time of this picture, why wouldn't or couldn't you show us that? Maybe because you didn't share virtually anything with us? The silence was deafening.
Lol! But I see some actual mining gear in the back, Knc maybe?
KnC, Redhash, and some Ants
On Jan 2 (+72 hours) Mapuao posted:
Worker 1 Hash Rate 0 Gh/s
Worker 2 Hash Rate 0 Gh/s
Worker 3 Hash Rate 0 Gh/s
Worker 4 Hash Rate 0 Gh/s
What's going on?
Clear answer, PLEASE!
On Jan 3 (+96 hours) madpoet posted:
1 of the 4 online. This is just nuts. I do this for a living and would be fired if I left my clients down like this.
On Jan 3 (+96 hours = 4 days) you tell us:
Updates really quick:
We have now moved from our 2k facility, to a brand new 7k facility.
Reasons for delays in getting hash rate up:
-No power till Wednesday, due to storms and holiday
-New facility STILL does not have internet, have been using 4G LTE backup until ISP finishes install
-One of the existing breaker boxes in the facility blew completely the first night, electricians are now replacing and massively upgrading the power and cooling here as I type this
-Delays in parts, the storms + 2 holidays caused ~50% of my parts for setup to be delayed, massively increasing my time needed to get things up and running
-Half of the Jupiters stopped working from the unplug-replug 14 hours later. Every single Antminer I have, as well as the Avalons and BF, had no problems, but I now have ~8 Jupiters to diagnose.
As of now, R15/16_2 and R15/16_4 are running at full speed, with the next 2 being worked on right now.
I am very sorry for all of this lost hash rate, but I have done everything within my limited power to get this stuff up and running ASAP. I still don't have a bed or internet at my apartment, so even communication has been difficult. I expect to have everything up and running 100% by the end of tomorrow, if not by Sunday.
Also, if I get one more fucking 502 error while trying to post I'll kill something.
On January 4th (+120 hours = 5 days) all miners were at 0.
Epic FAIL
Then 2 miners started working sporadically for quite a few days.
On January 6 (7 days) Thomas posts that you are still moving equipment to a new breaker and scolds us for asking for updates. We have been down 7 days at this point, a solid !@#$%^& week, and he tells us "On another note we do have other things we need to get done than post a second by second breakdown of what's going on"
Did we EVER ask for second by second updates? We are 7 days into an outage with virtually NO information and we are basically told to quit asking!
On January 6 Thomas posts:
I would venture to guess that worker 2 is dead. Either that or my F5 voodoo is failing.
It's the breaker that's being installed; power needs to be split up so as to not blow everything.
I thought that the "new power" was already in and "upgraded"? Yet you still maintain: "For the move itself we were down no more than 72 hours. Anything beyond that was equipment problems." At day 7 you were still messing with power; that is not an equipment problem.
The bottom line here is, as I originally posted, that the move was botched, plain and simple. This thread is proof of that. We were told that competent IT professionals would be hosting our investment "at a world class hosting facility". It is clear that was not the case. You aren't fooling anyone on this thread; I have received quite a number of PMs in support of my stance and my vocal approach. I welcome any responses, public or private.
I exchanged quite a number of PMs with DZ during the outage and based on those decided to give you the benefit of the doubt, figuring you would do the right thing. That clearly did not happen. $150 off of hosting is a pittance. Your botched move cost us 7 BTC, ~$5,500!
It is disingenuous and incredibly insulting to all here that the blame is always placed elsewhere and no real responsibility is taken. I am OK losing money to increasing difficulty, market conditions, low BTC value, true Acts of God, etc. Those are all real issues that are truly outside anyone's control. This botched move does not fall into those categories.
We placed our $$$ and our trust in you and we got screwed, royally.