I don't read PMs for fun. Very, very rarely, I need to read PMs for technical, administrative, or legal reasons, but I try to avoid this as much as possible. I don't want to spy on anyone's personal business.
Obviously, lol. I bet the first thing theymos did when he became admin was to read Satoshi's PMs.
I've never read Satoshi's PMs.
There is nothing to say they didn't subpoena the records of the Shroomery.
They did.
|
|
|
Indeed, which is why Matt's relay system and IBLT are needed; they will massively improve block propagation. People arguing for 1MB forever ignore this, yet when 20MB blocks finally appear, they will take less than 1MB to broadcast through the network. Only bootstrapping new nodes and re-syncing will need the full blocks transmitted.
Those things help, but all full nodes still need to download all new transactions. Even with all proposed optimizations, you'll still need to download about 20 MB and upload 20*x MB (where x depends on, and is probably less than, the number of peers you have) for every 20 MB block you receive. This is better than the current network, which probably requires more than twice as much bandwidth for both download and upload. But it's not a magic bullet. (A rough sketch of this arithmetic follows below.)
You proposed increasing the limit to 2 MB in 2 years. This is what should have been done in Feb 2013, when this debate last came up in earnest. Unfortunately, it's now too little, too late. Of course, IBLT was not considered at that time.
If 2 MB is all that can be agreed on now, then it's better to schedule that hard fork ASAP rather than agree on something marginally higher in 6-12 months. I personally think that 10 MB would probably be fine, though I know that there are many people who would disagree with me.
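To put rough numbers on the download/upload estimate above (about 20 MB down and 20*x MB up per 20 MB block), here is a minimal sketch; the value of x used is an illustrative assumption, not a measurement:

```python
# Rough per-node bandwidth for relaying one block, assuming an
# IBLT/relay-network-style optimization in which each new transaction
# is downloaded roughly once rather than once per peer.

def node_bandwidth_mb(block_mb, x):
    """block_mb: block size in MB.
    x: upload multiplier -- depends on (and is probably less than)
       the number of peers you have."""
    download = block_mb    # each new transaction fetched roughly once
    upload = block_mb * x  # each transaction re-uploaded to about x peers
    return download, upload

# Example with the post's numbers: a 20 MB block and a hypothetical x of 4.
dl, ul = node_bandwidth_mb(20, x=4)
print(f"download ~{dl} MB, upload ~{ul} MB")  # download ~20 MB, upload ~80 MB
```

Per the estimate above, the unoptimized network would need more than twice these amounts, since transactions are effectively transferred once at broadcast time and again inside the block.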
|
|
|
I haven't read this thread much at all, so maybe this has already been said, but I've been noticing a lot of people saying things like, "If the block size isn't raised, then that would be bad because ..." But just wanting larger blocks isn't a good argument for increasing the max block size. By far the most important issue is what the network can support. We're just going to have to learn to deal with the fact that the network's capacity is limited and fees will probably always be larger than many people want.
Exactly what block size the network can support is very much debatable. I currently think that 10 MB would be fine and 50 MB would be too much, though these are mostly just feelings. There should be more rigorous study of the actual limits of the network. (Gavin's done some nice work on the software/hardware front, though I'm still worried about the capabilities of typical Internet connections, and especially how they'll increase over time.)
|
|
|
It seems that hotmail.com is silently dropping forum emails for some reason, even though they say via SMTP that they're accepting the emails. You should complain to them.
|
|
|
Seems pretty plausible, though you're right that it can't be proven. Still, people should stay away from 999dice.com until they change the way they deal with seeds to rule out this sort of tampering.
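For background, the standard way dice sites rule out this kind of tampering is a commit-reveal ("provably fair") scheme, sketched below; the names and roll formula are illustrative, not 999dice.com's actual code:

```python
import hashlib, hmac, secrets

# 1. Before any bets, the server commits to its seed by publishing a hash.
server_seed = secrets.token_hex(32)
commitment = hashlib.sha256(server_seed.encode()).hexdigest()  # shown up front

# 2. The player picks a seed AFTER seeing the commitment, so the server
#    can't have chosen its seed to bias the combined result.
client_seed = "player-chosen-string"

def roll(server_seed, client_seed, nonce):
    """Deterministic 0-9999 roll from both seeds plus a per-bet nonce.
    (Slight modulo bias ignored for the sake of this sketch.)"""
    msg = f"{client_seed}:{nonce}".encode()
    digest = hmac.new(server_seed.encode(), msg, hashlib.sha256).hexdigest()
    return int(digest[:8], 16) % 10000

# 3. When the server seed is revealed later, the player verifies the
#    commitment and replays every roll to check for tampering.
assert hashlib.sha256(server_seed.encode()).hexdigest() == commitment
print(roll(server_seed, client_seed, nonce=0))
```

A site that deviates from this pattern (for example, by keeping the option to swap the server seed after seeing bets) can bias results without the discrepancy being provable, which is the concern here.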
|
|
|
The problem with that is that many of the current anti-DoS measures distinguish between users by looking at their IP addresses. For example, if you're not logged in, then your IP is limited to one search every 100 seconds (or something like that -- I don't remember the exact number) to prevent you from overloading the server.
IMO Tor needs to add some configurable proof-of-work mechanism to hidden services for them to be widely usable. For example, one thing that comes to mind is that the client could prove that he's holding x GB of data unique to a certain hidden service, and after verifying this, Tor could pass a unique private IP for that client (e.g. 10.1.2.3) to the hidden service's web server. (The IP would be different per hidden service, so it would only be a minor reduction in privacy -- the hidden service would only be able to track you across its own pages.) Then the standard idea of "block IPs that abuse the server" could be used by the hidden service.
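A minimal sketch of how the per-service private IP idea could work; Tor has no such feature today, and the key derivation here is entirely my assumption:

```python
import hmac, hashlib, ipaddress

def per_service_client_ip(service_key: bytes, client_id: bytes) -> str:
    """Derive a stable private IP for a client, unique per hidden service.

    service_key: secret known only to this hidden service (assumption),
                 so the same client maps to unrelated IPs on different
                 services and can't be tracked across them.
    client_id:   whatever stable identifier the proof-of-work or
                 proof-of-storage handshake yields for the client.
    """
    digest = hmac.new(service_key, client_id, hashlib.sha256).digest()
    host = int.from_bytes(digest[:3], "big")              # 24 bits of digest
    return str(ipaddress.IPv4Address((10 << 24) | host))  # into 10.0.0.0/8

# The same client gets unrelated addresses on two different services:
print(per_service_client_ip(b"service-A-secret", b"client-token"))
print(per_service_client_ip(b"service-B-secret", b"client-token"))
```

The hidden service's web server could then apply its ordinary per-IP throttles and bans to these derived addresses, exactly as it would to real ones.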
|
|
|
While we have you, how does the proxy know if the source image has changed content, even if it's at the same URL as before? Would it take ~a month to refresh, or is there another check that happens?
There is no caching on the bitcointalk.org side. The image is always passed directly from the source server to the user. Any Expires or Cache-Control headers sent by the origin server are passed through as well, so caching might be done by the client. The code is computed from the URL, not the image data.
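For illustration, a code "computed from the URL" could be as simple as a keyed hash over the URL string; the secret and URL shape below are assumptions, not the forum's actual implementation:

```python
import hmac, hashlib

PROXY_SECRET = b"server-side secret"  # assumption: never leaves the server

def proxy_code(image_url: str) -> str:
    """Code derived from the URL alone. It never changes when the image
    content changes, which is why no freshness check is needed: the proxy
    simply re-fetches from the source URL on each request."""
    return hmac.new(PROXY_SECRET, image_url.encode(),
                    hashlib.sha256).hexdigest()[:16]

url = "https://example.com/image.png"
print(proxy_code(url))  # stable for this URL; unforgeable without the secret
```

The keyed hash stops third parties from minting proxy links for arbitrary URLs while letting the server verify requests statelessly.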
|
|
|
That's annoying. I suppose there's not much I can do about it, though.
|
|
|
Not too sure, but I think the reason we can't just do 2 MB instead of 1 is because we would need to fork again. Throwing it at 20 MB should resolve the issue for quite some time.
What I support most strongly is that we do substantially-delayed hard forks with conservative values. Then doing hard forks regularly isn't such a big issue.

For example, Bitcoin Core can be immediately modified to increase the max block size to 2 MB on a specific date 2 years from now. I think that pretty much everyone would be basically OK with this max block size (and even higher values might be widely acceptable). By the time the change actually takes effect in 2 years, everyone will already have upgraded, because very few people use 2-year-old software. Businesses and users won't have to go out of their way to choose one fork over another, and so there will be less room for messiness.

Then if a really nice academic study convincingly arguing that 5 MB blocks are safe is published 1 week after the 2-year-delayed change is added, another 2-year-delayed change can be added right away with very little extra cost. After ~2 years, the max block size will increase to 2 MB, and then a week later change to 5 MB.

Yes, 2 years is a long time. But I'm confident that Bitcoin will survive that long with 1 MB blocks.
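A minimal sketch of what such a pre-announced schedule could look like; the dates are placeholders and the sizes are the hypothetical values from the example above, not a real proposal (real consensus code would key off block height or median time-past rather than calendar dates):

```python
import datetime

# (activation date, max block size in MB): the initial 2-year-delayed
# increase to 2 MB, plus a follow-up change added a week later.
SCHEDULE = [
    (datetime.date(2015, 6, 1), 1),   # illustrative "today": 1 MB
    (datetime.date(2017, 6, 1), 2),   # 2-year-delayed increase
    (datetime.date(2017, 6, 8), 5),   # added a week later, same delay
]

def max_block_size_mb(on_date):
    """Return the max block size in effect on a given date."""
    size = SCHEDULE[0][1]
    for activation, mb in SCHEDULE:   # SCHEDULE is sorted by date
        if on_date >= activation:
            size = mb
    return size

print(max_block_size_mb(datetime.date(2016, 1, 1)))   # 1
print(max_block_size_mb(datetime.date(2017, 6, 10)))  # 5
```

The point of the structure is that appending a new entry is cheap once the delayed-fork convention exists, which is the "very little extra cost" argument above.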
|
|
|
There was an expected 502 error for ~2 minutes just now.
|
|
|
In the last 6 hours, no, but over the last ~20 hours I was getting error messages (both the "unable to connect" and the "SMF is unable to connect to the server" messages) similar to the ones I was getting during the recent downtime.
We were fixing things about 20 hours ago. Hopefully that's what you saw. If anyone sees any nginx or "connection problems" errors in the next day, please post here.
|
|
|
Did anyone get errors in the last 6 hours (or the next ~12 hours)? I'm hoping that it's stable now.
|
|
|
Bitcoin can be changed in a backward-incompatible way and still remain Bitcoin. It was done by Satoshi with the version checksum change, for example. Hopefully there won't be any huge hard fork controversy in the future. It'd be a big mess if people had to actively decide between one fork or another. If this does happen, then I will endorse the most correct version of Bitcoin, and this version is what I'll mean when I say "Bitcoin". In particular, these are some principles that any potential hard fork must not violate:
- The network must remain substantially decentralized.
- The inflation schedule must be the same or lower/slower. (Though I'm not 100% sure whether lowering inflation would be OK.)
- No one should be allowed by design to steal your money.
- As much as reasonably possible, no one should be able to prevent you from spending your money.
- Anonymity should be at least possible.
I will oppose any unsafe hard fork, even if it's proposed by Gavin. I and the sites I have some hand in are independent of the dev group, the Bitcoin Foundation, and other companies/organizations. I don't know whether Gavin's current proposal is safe, so the only thing I'm doing now is recommending caution.
It was wrong, and I hope the separation of miners and development continues for at least a few decades, before miners and developers become so embedded with each other that we have a repeat of what led to Bitcoin in the first place.
I wasn't a fan of the whole idea of giving miners any special say on the issue. (Though it wasn't actually much of a vote, since miners could only confirm/reject P2SH.) Miners are basically employees of the network, and it should be the actions of users and businesses that influence what miners do, not the other way around. It would have been possible and better for users and businesses to (at a reasonable pace) force miners to accept the P2SH change.
|
|
|
AWS would likely protect us from DDoS attacks in the future.
If by "protect" you mean "allow attackers to use a limitless amount of money"... AWS doesn't have much built-in DDoS protection AFAIK. While you can definitely use AWS as part of a very scalable and stable architecture, it's easy for costs to go out of control. For example, a site I used called inkblazers.com is apparently going to shut down very soon because they're spending over $60,000 per month on their AWS hosting. The forum's hosting costs a few thousand dollars per month. Their Alexa rank is 118,092, whereas bitcointalk.org's Alexa rank is 4,568. (They probably have to deliver a lot more data, but that cost is still absolutely ridiculous.) And AWS doesn't guarantee uptime. Reddit, for example, is apparently based on AWS, and temporary overload errors are very common there. On an average day, bitcointalk.org is usually more stable than Reddit, I think. It's far easier to do things mostly-right if you just use a traditional single server. This also allows more control and better security. (Amazon doesn't have the best reputation for protecting customers.) Responding to things that other people keep saying elsewhere: Yes, the forum has a lot of money. But this is mostly due to BTC value increases, and it would quickly be depleted if costs increased much higher. Spending at the level of most VC-backed startups would be absolutely reckless. I don't think that the forum can afford more than maybe one additional full-time employee, for example. (Currently the only full-time employees are Slickage. I and the moderators are paid so little that we're basically volunteers. There are also a few part-time contractors.) Believe me, no one finds it more annoying than I do when the forum is down -- I'm usually the one who has to fix it and try to prevent it from happening again... In this case, things are breaking without any obvious cause, so it might take some time to figure this out and get things rock-solid again.
|
|
|
It has basically no features. It's barely better than the commenting system on a blog.
|
|
|
Most of the recent downtime is caused by MariaDB very rarely hanging on a random query and preventing all other queries from completing. If this sort of thing happens when everyone is sleeping/away (especially on weekends), then there's downtime. Someone on the MariaDB IRC says that it might be a bug in the version of MariaDB that I upgraded to as part of the server change.
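For anyone who wants to watch for this sort of thing, a small sketch that polls MariaDB for long-running queries; the connection details are placeholders, the 60-second threshold is arbitrary, and it assumes the PyMySQL driver:

```python
import pymysql  # assumption: PyMySQL is the driver in use

def find_stuck_queries(min_seconds=60):
    """List non-sleeping queries that have run suspiciously long. One hung
    query can block everything queued behind it, as described above."""
    conn = pymysql.connect(host="localhost", user="monitor",
                           password="...", database="information_schema")
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT id, time, state, info FROM processlist "
                "WHERE command <> 'Sleep' AND time > %s ORDER BY time DESC",
                (min_seconds,))
            return cur.fetchall()
    finally:
        conn.close()

for qid, secs, state, sql in find_stuck_queries():
    print(f"query {qid}: {secs}s in state {state!r}: {(sql or '')[:80]}")
```

Run from cron, this at least turns a silent weekend hang into an alert (or a `KILL <id>`) before it becomes hours of downtime.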
|
|
|
Searching is enabled again now. I also made several improvements to search. It should be substantially faster now, and maybe also more accurate. (SMF was extremely buggy in this area -- it's surprising that search was even usable before.)
|
|
|
I undeleted 15 PMs from evrynet to you. This might not be all of the PMs -- PMs are removed from the database if the sender and all recipients delete them.
|
|
|
|