Bitcoin Forum
  Show Posts
781  Other / Beginners & Help / Re: Whitelist Requests (Want out of here?) on: February 22, 2013, 05:15:53 AM
HOLY! Wow, Ripple must be pretty popular...

qnfauf: OK
djbaniel: OK
wingsuit: OK
desper: OK
tylercrumpton: OK
7kansb: OK
mareo87: OK
josiahgarber: OK
bitbully: OK
dmanti: OK
kalinka: OK
jesse: OK
Abn0rmal: OK
mumung: OK
Codzart: OK
782  Bitcoin / Bitcoin Discussion / Re: How merchant will behave when there is hard fork & they are not sure who win? on: February 22, 2013, 04:15:44 AM

Exactly. In fact, BradZimdack's thoughts are exactly the same thoughts I had a few days ago, except my thoughts were on an individual level. So, let me be clear: what set me into panic mode was not the discussions about how the limit would be raised, but the discussions about whether it would be raised at all, given opposition from a significant number of people. That turned the idea of raising the limit into a complete non-starter since it requires a hard fork, despite the fact that changing the maximum block size has been the plan since the very beginning. I can understand not liking Gavin's plan of just allowing the blocksize to be unlimited and having the market sort things out (I don't, either), but I'm sorry, it takes a special kind of stupid to say that no hard forks can happen ever because you "subscribed to the constants" instead of the spirit of Bitcoin like the rest of us did.

If you are arguing for staying at 1 MB/block because that is the constant you would choose today if we could reset it to whatever we wanted (whether another constant or an algorithm), that's fine. It's a valid choice. I'd be interested to know why you prefer that specific number over the other options. But to argue that we should keep the limit there because all change is evil is both irrational and stupid. Whatever we decide to do, it's not going to be something that we rush into. Also, you'll notice that in these discussions not a single supporter of this change has even proposed changing the important constants like the total final money supply, so the slippery-slope argument does not apply.

+1 I fully agree.

Also, we should turn the max block size problem into an opportunity. As many posters have already said: it is an opportunity to replace it with an algorithm that provides an incentive for users to add fees to their transactions, maintaining an element of block-space scarcity and enhancing miner revenue.
Yup, and there's a pretty good start on how to do that already.

Now, just to illustrate what the people who are against this hard fork are like, consider the BIP 16/17 debate (assuming you were around to see that shit-storm). These people are like the people who said that we shouldn't have either. Even Luke-jr was stunned to hear that. Sure, that's not the best example since we still don't use it much today (but that's mainly because a UI and payment protocol don't exist), but you can see the point I'm trying to make.
783  Bitcoin / Bitcoin Discussion / Re: How merchant will behave when there is hard fork & they are not sure who win? on: February 22, 2013, 03:55:09 AM
The firm I work for, which has made substantial investment into Bitcoin, has already thought quite a bit about this.  This is what we're doing and thinking:

* While this block size issue remains such a big uncertainty, we have drastically slowed our pace of investment and we've shelved some start-ups that were already in progress.  We're not stopping or backing out, but we're proceeding much slower and more cautiously until a clearer resolution to this problem appears.

* If we approach the 1MB limit and a solution does not appear forthcoming, we'll cease all new investment.

* If we pass the 1MB limit without a solution, seeing even the slightest hint that Bitcoin's competitive advantages over the conventional banking and payment systems are being eroded due to an inability to scale, we'll dump all our bitcoin assets and holdings.

* If a controversial solution is proposed, with fierce arguments on multiple sides, we will follow Gavin's fork, even if it's not ideal, on all our sites that accept BTC.

* If the fork hits, and there's even the slightest uncertainty as to which will survive, we will immediately dump all our bitcoin assets and holdings.  We will remove Bitcoin from all of our sites that accept multiple payment methods -- at least while we wait to see how things play out.

* If one fork kills off the other, we'll adopt that one and go back to business as usual.

* If both forks survive, with both being widely accepted (for example if Mt.Gox begins accepting Bitcoin-A and Bitcoin-B), we'll accept neither, dump everything, and write off blockchain-based currencies as too risky and too unstable.

What we really hope to see is a nice, smooth transition to a system that scales, which a very large majority of the network, including major service providers, agrees to.

Thanks for your post. I hope that this is a warning to the community that uncertainty over this issue is already having a negative impact on Bitcoin.
Exactly. In fact, BradZimdack's thoughts are exactly the same thoughts I had a few days ago, except my thoughts were on an individual level. So, let me be clear: what set me into panic mode was not the discussions about how the limit would be raised, but the discussions about whether it would be raised at all, given opposition from a significant number of people. That turned the idea of raising the limit into a complete non-starter since it requires a hard fork, despite the fact that changing the maximum block size has been the plan since the very beginning. I can understand not liking Gavin's plan of just allowing the blocksize to be unlimited and having the market sort things out (I don't, either), but I'm sorry, it takes a special kind of stupid to say that no hard forks can happen ever because you "subscribed to the constants" instead of the spirit of Bitcoin like the rest of us did.

If you are arguing for staying at 1 MB/block because that is the constant you would choose today if we could reset it to whatever we wanted (whether another constant or an algorithm), that's fine. It's a valid choice. I'd be interested to know why you prefer that specific number over the other options. But to argue that we should keep the limit there because all change is evil is both irrational and stupid. Whatever we decide to do, it's not going to be something that we rush into. Also, you'll notice that in these discussions not a single supporter of this change has even proposed changing the important constants like the total final money supply, so the slippery-slope argument does not apply. Stop trying to kill Bitcoin.
784  Other / Beginners & Help / Re: Whitelist Requests (Want out of here?) on: February 20, 2013, 06:46:33 PM
skis: OK
785  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 20, 2013, 08:03:48 AM
I think it is clear that throwing vast sums of money into slim chances of even a slight edge is not at all unlikely behavior for miners.
Agreed, which is why the requirements for increasing the maximum block size need to be difficult, but not impossible, to meet. Luckily, if we fail, we can just implement a soft fork that either sets a manual limit lower than the automatic limit or adds new requirements for raising the maximum block size.

In fact, that just gave me another idea: we can make it so that whatever we end up doing expires at a certain block a few years in the future and reverts to an unlimited block size. That way, if we screw up, we can keep trying again via a soft fork at that expiration point.
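
A minimal sketch of that idea (Python, purely illustrative; the constant names and the EXPIRY_HEIGHT value are made up here, nothing like this exists in the client):

Code:
# Hypothetical sketch only: a consensus limit that lapses at a preset height,
# so a later soft fork can impose a new rule at the expiration point.
ONE_MB = 1_000_000
EXPIRY_HEIGHT = 500_000   # "a certain block a few years in the future" (made up)

def max_block_size(height, automatic_limit):
    """Return the effective maximum block size at a given height."""
    if height >= EXPIRY_HEIGHT:
        return float("inf")   # hard limit expires; soft-fork rules take over
    return automatic_limit    # whatever algorithm/constant we end up adopting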
786  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 20, 2013, 07:30:03 AM
So that particular calculation for automatically "adapting" is simply an example of why I am doubtful that automation is the way to go, since basically all it amounts to is carte blanche for miners who want mining limited to a privileged elite: they can simply spout out the biggest blocks permitted, constantly, thus automatically driving up the permitted size, letting them spout out even bigger blocks, and so on, as fast as possible, until the few of them left are able to totally control the whole system in a nice little cartel or oligarchy or plutarchy or kakistocracy or whatever the term is for such arrangements. (Sounds like an invitation to kakistocracy actually, maybe?)

Basically, automatic "adaptation" seems more like automatic acquiescence to whatever the cartel wants, possibly leaving them along the way with incentives to maintain the appearance of controlling less of the network than they actually do, so that if/when they do achieve an actual monopoly it will appear to the public as pretty much whatever number of "actors" the monopoly chooses to represent itself as for public relations purposes. (To prevent panics caused by fears that someone controls 51%, for example.)

-MarkM-

If they manage to do that in a way that keeps global orphan rates down and the difficulty at least stable (so this would have to be done slowly), all while losing boatloads of money by essentially requiring no transaction fee ever, good for them. Other 51% attacks would be more economical, especially since this attack would be trivial to detect on the non-global-state side of things. For example, people would notice a large number of previously unseen transactions in these blocks, or extremely spammy transactions. Worst case, they could get away with including all legitimate transactions plus a small number of their own and not be detected, raising the limit only enough for a block to hold slightly more than the typical number of transactions made per block period.

However, other considerations can be added. That suggestion is by no means final. Some extreme ideas (not outside the box, I know, but just to prove that this attack can be prevented with more constraints):
*To increase max block size, global orphan rate must be below 1%.
*To increase max block size, 95% of the blocks in the last difficulty period must be at/very near the current max size.
*Max block size can only increase by a factor of 2 over four years.

For more ideas, think about the process of raising the block size limit in manual terms. What would/should we consider before manually raising the block size limit? Let's see if we can codify that...
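
As a rough, hypothetical sketch of how those three constraints might be codified (Python; the inputs, namely a measured global orphan rate, the block sizes of the last difficulty period, and the max size four years ago, are assumed to be available somehow, and "very near" is read here as within 5% of the max):

Code:
# Hypothetical constraints, not a concrete proposal.
def may_increase_max_size(orphan_rate, last_period_sizes, current_max,
                          max_size_four_years_ago):
    # 1. Global orphan rate must be below 1%.
    if orphan_rate >= 0.01:
        return False
    # 2. 95% of blocks in the last difficulty period must be at/very near the max.
    near_full = sum(1 for s in last_period_sizes if s >= 0.95 * current_max)
    if near_full < 0.95 * len(last_period_sizes):
        return False
    # 3. The max block size may only double over any four-year window.
    if current_max >= 2 * max_size_four_years_ago:
        return False
    return True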

787  Other / Meta / Re: 0.8 announcement in "Important Announcements" section? on: February 20, 2013, 07:01:13 AM
The idea is that people should subscribe to that board for email updates.
788  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 20, 2013, 06:38:54 AM
Gavin and co, drop your silly suggestions of using an idea that can't be globally synchronized without a central authority. This suggestion does exactly what you are looking for in a decentralized manner and it's globally synchronized:
I actually posted the below in the max_block_size fork thread but got absolutely no feedback on it, so rather than create a new thread for exposure, I am reposting it here in full. It is something to think about with regard to moving towards a fairly simple process for creating a floating blocksize for the network, one that is conservative enough to avoid abuse and works in tandem with difficulty so that no new mechanisms need to be made. I know there are probably a number of holes in the idea, but I think it's a start and could be made viable, so that we get a system that allows blocks to get bigger but doesn't run out of control such that only large miners can participate, and that also avoids the difficulty manipulation that could occur if there were no max blocksize limit. OK, here goes.

I've been stewing over this problem for a while and would just like to think aloud here....

I very much think the blocksize should be network regulated much like difficulty is used to regulate propagation windows based on the amount of computation cycles used to find hashes for particular difficulty targets. To clarify, when I say CPU I mean CPUs, GPUs, and ASICs collectively.

Difficulty is very much focused on the network's collective CPU cycles to control propagation windows (1 block every 10 mins), avoid 51% attacks, and distribute new coins.

However, the max_blocksize is not related to the computing resources needed to validate transactions and propagate blocks; it is geared much more to network speed, the storage capacity of miners (and even of non-mining full nodes), and the verification of transactions (which, as I understand it, means hammering the disk). What we need to determine is whether the nodes supporting the network can quickly and easily propagate blocks without this affecting the propagation window.

Interestingly there is a connection between CPU resources, the calculation of the propagation window with difficulty targets, and network propagation health. If we have no max_blocksize limit in place, it leaves the network open to a special type of manipulation of the difficulty.

The propagation window can be manipulated in two ways, as I see it. One is creating more blocks, as we classically know: throw more CPUs at block creation and we transmit more blocks (more computation power = more blocks produced), and the difficulty ensures the propagation window doesn't get manipulated this way. The difficulty adjustment uses the timestamps in the blocks to determine whether more or fewer blocks were created in a certain period and whether difficulty goes up or down. All taken care of.

The propagation window could also be manipulated in a more subtle way, though: by transmitting large blocks (huge blocks, in fact). Large blocks take longer to transmit, longer to verify, and longer to write to disk, and this manipulation of the number of blocks being produced is unlikely to be noticed until a monster block gets pushed across the network (in a situation where there is no limit on blocksize, that is). Now, because there is only a 10-minute window, the block can't take longer than that to propagate, I'm guessing. If it does, difficulty will sink and we have a whole new problem: manipulation of the difficulty through massive blocks. Massive blocks could mess with difficulty and push out smaller miners, causing all sorts of undesirable centralisation. In short, it would probably destroy the Bitcoin network.

So we need a maximum block size that is high enough that the vast majority of nodes are comfortable with it, yet isn't so big that it can be used to manipulate the difficulty by artificially slowing propagation across the network with massive blocks. By watching how well the propagation window is maintained through its difficulty, we may be able to determine whether the propagation of blocks is slowing and whether the max_blocksize should be adjusted down to ensure the propagation window remains stable.

Because the difficulty can be potentially manipulated this way we could possibly have a means of knowing what the Bitcoin network is comfortable with propagating. And it could be determined thusly:

If the median size of the blocks transmitted in the last difficulty period is bumping up against the max_blocksize (the median being chosen to avoid situations where one or more malicious entities try to arbitrarily push up the max_blocksize limit), and the difficulty is "stable", increase the max_blocksize (say by 10%) for the next difficulty period (say when the median is within 20% of the max_blocksize); but if the median size of blocks for the last period is much lower (say less than half the current blocksize_limit), then lower the size by 20% instead.

However, if the median size of the blocks transmitted in the last difficulty period is bumping up against the max_blocksize and the difficulty is NOT stable, don't increase the max_blocksize, since there is a possibility that the network is not currently healthy and increasing or decreasing the max_blocksize is a bad idea. Or, alternatively, in those situations lower the max_blocksize by 10% for the next difficulty period anyway (not sure whether this is a good idea or not, though).

In either case, 1 MB should be the lowest the max_blocksize can go if it continued to shrink. Condensing all that down to pseudocode...

Code:
IF(Median(blocksize of last difficulty period) is within 10% of current max_block_size 
AND new difficulty is **higher** than previous period's difficulty),
    THEN raise max_block_size for next difficulty period by 10%

otherwise,

Code:
IF(Median(blocksize of last difficulty period) is within 10% of current max_block_size 
AND new difficulty is **lower** than previous period's difficulty),
    THEN lower max_block_size for next difficulty period by 10% UNLESS it is less than the minimum of 1mb.


Checking the stability of the last difficulty period against the next one is what determines whether the network is spitting out blocks at a regular rate or not. If the median size of blocks transmitted in the last difficulty period is bumping up against the limit and difficulty is going down, it could mean a significant number of nodes can't keep up; especially if the difficulty needs to move down, that means blocks aren't getting to all the nodes in time and hashing capacity is getting cut off because nodes are too busy verifying the blocks they received. If the difficulty is going up and the median block size is bumping up against the limit, then there's a strong indication that nodes are all processing the blocks they receive easily, and so raising the max_blocksize limit a little should be OK. The one thing I'm not sure of, though, is determining whether the difficulty is "stable" or not; I'm very much open to suggestions on the best way of doing that. The argument that what is deemed "stable" is arbitrary, and could still lead to manipulation of the max_blocksize just over a longer and more sustained period, is I think possible too, so I'm not entirely sure this approach could be made foolproof. How does the calculation of difficulty targets take these things into consideration?

OK, guys, tear it apart.
In plain English, this means that if over 50% of the mining power (which shouldn't be only a single miner by definition since we'd be totally screwed anyway) think that they can make more money in overall fees by allowing the maximum block size to increase, they can each vote for this increase by hitting (close to) the limit in each block they make, which in turn proves that the network can handle the increase, especially if we use this idea:
So we need a maximum block size that is high enough that the vast majority of nodes are comfortable with it, yet isn't so big that it can be used to manipulate the difficulty by artificially slowing propagation across the network with massive blocks. By watching how well the propagation window is maintained through its difficulty, we may be able to determine whether the propagation of blocks is slowing and whether the max_blocksize should be adjusted down to ensure the propagation window remains stable.

A measure of how fast blocks are propagating is the number of orphans. If it takes 1 minute for all miners to be notified of a new block, then on average the orphan rate would be about 10%.
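
That figure follows from a simple back-of-the-envelope model: with blocks arriving as a Poisson process averaging one per 600 seconds, the chance that a competing block is found during a 60-second propagation delay is 1 - e^(-60/600), about 9.5%, i.e. roughly 10%. A quick illustrative check (Python):

Code:
import math

BLOCK_INTERVAL = 600     # seconds between blocks, on average
PROPAGATION_DELAY = 60   # the "1 minute" in the example above

# Probability another block is found while a fresh block is still propagating.
orphan_rate = 1 - math.exp(-PROPAGATION_DELAY / BLOCK_INTERVAL)
print(f"{orphan_rate:.1%}")   # ~9.5%, i.e. roughly 10%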

However, a core of miners on high-speed connections could keep that down, and orphans are by definition not part of the block chain, so they can't simply be counted from the chain itself.

Maybe add an orphan link as part of the header field.  If included, the block links back to 2 previous blocks, the "real" block and the orphan (this has no effect other than proving the link).  This would allow counting of orphans.  Only orphans off the main chain by 1 would be acceptable.  Also, the header of the orphan block is sufficient, the actual block itself can be discarded.

Only allowing max_block_size upward modification if the difficulty increases seems like a good idea too.

A 5% orphan rate probably wouldn't knock small miners out of things.  Economies of scale are likely to be more than that anyway.

Capping the change by 10% per 2 week interval gives a potential growth of 10X per year, which is likely to be at least as fast as the network can scale.
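
For reference, the compounding behind that figure (illustrative arithmetic only, assuming 26 roughly two-week difficulty periods per year):

Code:
PERIODS_PER_YEAR = 26
print(1.10 ** PERIODS_PER_YEAR)              # ~11.9x/year at +10% per period
print((2 ** (1 / 8)) ** PERIODS_PER_YEAR)    # ~9.5x/year at one 8th-root-of-2 step per period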

So, something like

Code:
if ((median of last 2016 blocks < 1/3 of the max size && difficulty_decreased) || orphan_rate > 5%)
 max_block_size /= 8th root of 2
else if(median of last 2016 blocks > 2/3 of the max size && difficulty_increased)
 max_block_size *= 8th root of 2 (= 1.09)
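
For concreteness, a minimal runnable rendering of that rule (a Python sketch, not a concrete proposal; the median block size, difficulty direction, and measured orphan rate are assumed to come from elsewhere, and the 1 MB floor from the earlier proposal is added):

Code:
STEP = 2 ** (1 / 8)          # one adjustment step, ~1.09x per difficulty period
MIN_MAX_SIZE = 1_000_000     # 1 MB floor, per the earlier proposal

def next_max_block_size(max_size, median_size, difficulty_increased,
                        difficulty_decreased, orphan_rate):
    """One adjustment per 2016-block difficulty period, following the rule above."""
    if (median_size < max_size / 3 and difficulty_decreased) or orphan_rate > 0.05:
        return max(max_size / STEP, MIN_MAX_SIZE)
    if median_size > 2 * max_size / 3 and difficulty_increased:
        return max_size * STEP
    return max_size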

The issue is that if you knock out small miners, a cartel could keep the orphan rate low, and thus prevent the size from being reduced.
So, no increase in the maximum block size could ever hurt the miners more than a small amount. And if a miner doesn't think they are making enough in fees and that they won't be able to make up the difference in volume? They simply fill their blocks to just under the point where they would be considered "full" for the purposes of this calculation. There are even more economic factors here, but it makes my head hurt to even think of them.

- snip -
My objection I don't see answered is: what stops other miners from spontaneously building a longer chain with smaller blocks that propagate more easily? (In the absence of said cartel.) . . .
The proof-of-work system.
How, I'm asking again? The proof of work will naturally favor smaller blocks that can be spread faster to the majority of the network (and thus support decentralization). Yes, the worst-connected nodes will be left behind, but as long as there are plenty of "middle class" nodes that naturally favor blocks under a certain size and slow the propagation of oversized blocks, making them more likely to become orphans, I don't see a problem.

The point of this debate is that the incentive for miners with faster bandwidth is to intentionally pad out their blocks to be so big that only slightly more than half of the hashing power on the network can receive them before the next block is discovered.  This leaves almost 50% of the hashing power on the network continually working on old blocks, never to earn any rewards, while those with the highest bandwidth leave them behind and increase their own rewards.  Doing so forces those with lower bandwidth out of the market, allowing the process to repeat, consolidating the mining into only those with the absolute highest bandwidth on the network.

The proof of work prevents the miners with slower bandwidth from solving a block any faster than those with higher bandwidth, and the bandwidth issue keeps them from working on the correct block.
And the neat part about the suggestion I quoted in this post is that any hashing power on the network continually working on old blocks is creating orphan blocks, which are directly considered in the maximum block size algorithm. Network security, therefore, is considered more important than the use of Bitcoin as a payment system, but a compromise between those two ideas is allowed when it benefits ALL users through more security and cheaper individual transactions.

Doctors have the right idea: Primum non nocere.  Or if you prefer: if it ain't broke, don't fix it.

Clearly Bitcoin, as currently implemented, 1MB limitation included, is doing something right.  Otherwise demand wouldn't be growing exponentially.  Gavin: with each Bitcoin purchase, users are implicitly "voting" that they approve of its current embedded constants (or at least, they don't mind them, yet).
No, they aren't. They're voting for a cryptocurrency that they think of as "Bitcoin". Every piece of Bitcoin documentation involving the 1 MB limit has been clear that it was temporary in order to protect the network from attacks while it was in its infancy. As for the people who didn't read the documentation, they would have no idea that this limit even currently exists since we never hit it. Therefore, any *coin that has a permanent 1 MB limit cannot, by definition, be called "Bitcoin".
789  Other / Meta / Re: [bug] cliking "delete post" with middle button kills it , no confirmation dialog on: February 20, 2013, 02:48:32 AM
Does the forum not pop up with an alert on "Are you sure you would like to delete this post?"

EDIT: Ah, it looks like it does! This could be fixed by making the delete button call a JS function that pops up that confirmation, instead of linking directly to the delete action.

Then it wouldn't work if you have JavaScript disabled.
What about changing the hrefs to JS links (or whatever, most likely just "#self" or "javascript:void(0)" or something, and then setting the onclick handler) via JS on page load?
790  Other / Meta / Re: 0.8 announcement in "Important Announcements" section? on: February 19, 2013, 11:41:54 PM
Huh. I don't know how we missed that...

Gavin should really be given the permissions to post there. In fact, all community leaders/moderators (IRC, etc) should have that.
791  Bitcoin / Important Announcements / Bitcoin-Qt / bitcoind version 0.8.0 released on: February 19, 2013, 11:40:36 PM
Bitcoin-Qt version 0.8.0 is now available from:
  http://sourceforge.net/projects/bitcoin/files/Bitcoin/bitcoin-0.8.0/

This is a major release designed to improve performance and handle the
increasing volume of transactions on the network.

Please report bugs using the issue tracker at github:
  https://github.com/bitcoin/bitcoin/issues

How to Upgrade
--------------

If you are running an older version, shut it down. Wait
until it has completely shut down (which might take a few minutes for older
versions), then run the installer (on Windows) or just copy over
/Applications/Bitcoin-Qt (on Mac) or bitcoind/bitcoin-qt (on Linux).

The first time you run after the upgrade a re-indexing process will be
started that will take anywhere from 30 minutes to several hours,
depending on the speed of your machine.

Incompatible Changes
--------------------

This release no longer maintains a full index of historical transaction ids
by default, so looking up an arbitrary transaction using the getrawtransaction
RPC call will not work. If you need that functionality, you must run once
with -txindex=1 -reindex=1 to rebuild block-chain indices (see below for more
details).

Improvements
------------

Mac and Windows binaries are signed with certificates owned by the Bitcoin
Foundation, to be compatible with the new security features in OSX 10.8 and
Windows 8.

LevelDB, a fast, open-source, non-relational database from Google, is
now used to store transaction and block indices.  LevelDB works much better
on machines with slow I/O and is faster in general. Berkeley DB is now only
used for the wallet.dat file (public and private wallet keys and transactions
relevant to you).

Pieter Wuille implemented many optimizations to the way transactions are
verified, so a running, synchronized node uses less working memory and does
much less I/O. He also implemented parallel signature checking, so if you
have a multi-CPU machine all CPUs will be used to verify transactions.

New Features
------------

"Bloom filter" support in the network protocol for sending only relevant transactions to
lightweight clients.

contrib/verifysfbinaries is a shell-script to verify that the binary downloads
at sourceforge have not been tampered with. If you are able, you can help make
everybody's downloads more secure by running this occasionally to check PGP
signatures against download file checksums.

contrib/spendfrom is a python-language command-line utility that demonstrates
how to use the "raw transactions" JSON-RPC api to send coins received from particular
addresses (also known as "coin control").

New/changed settings (command-line or bitcoin.conf file)
--------------------------------------------------------

dbcache : controls LevelDB memory usage.

par : controls how many threads to use to validate transactions. Defaults to the number
of CPUs on your machine, use -par=1 to limit to a single CPU.

txindex : maintains an extra index of old, spent transaction ids so they will be found
by the getrawtransaction JSON-RPC method.

reindex : rebuild block and transaction indices from the downloaded block data.

New JSON-RPC API Features
-------------------------

lockunspent / listlockunspent allow locking transaction outputs for a period of time so
they will not be spent by other processes that might be accessing the same wallet.

addnode / getaddednodeinfo methods, to connect to specific peers without restarting.

importprivkey now takes an optional boolean parameter (default true) to control whether
or not to rescan the blockchain for transactions after importing a new private key.

Important Bug Fixes
-------------------

Privacy leak: the position of the "change" output in most transactions was not being
properly randomized, making network analysis of the transaction graph to identify
users' wallets easier.

Zero-confirmation transaction vulnerability: accepting zero-confirmation transactions
(transactions that have not yet been included in a block) from somebody you do not
trust is still not recommended, because there will always be ways for attackers to
double-spend zero-confirmation transactions. However, this release includes a bug
fix that makes it a little bit more difficult for attackers to double-spend a
certain type ("lockTime in the future") of zero-confirmation transaction.

Dependency Changes
------------------

Qt 4.8.3 (compiling against older versions of Qt 4 should continue to work)


Thanks to everybody who contributed to this release:
----------------------------------------------------

Alexander Kjeldaas
Andrey Alekseenko
Arnav Singh
Christian von Roques
Eric Lombrozo
Forrest Voight
Gavin Andresen
Gregory Maxwell
Jeff Garzik
Luke Dashjr
Matt Corallo
Mike Cassano
Mike Hearn
Peter Todd
Philip Kaufmann
Pieter Wuille
Richard Schwab
Robert Backhaus
Rune K. Svendsen
Sergio Demian Lerner
Wladimir J. van der Laan
burger2
default
fanquake
grimd34th
justmoon
redshark1802
tucenaber
xanatos

792  Other / Beginners & Help / Re: Whitelist Requests (Want out of here?) on: February 19, 2013, 12:43:26 AM
Bitcoinmaniac: OK
shamaniotastook: OK
atomicdog: OK
nostradamus: OK
17chk4u: OK
drb: OK
simulacrum: OK
EricCartman: OK
Klisetron: OK
Bitbowoman: OK
lovenlifelarge: OK
SomeWhere: OK
litehosting: OK
pikeadz: OK
Cognitive Cryptography: OK
793  Other / Beginners & Help / Re: Whitelist Requests (Want out of here?) on: February 15, 2013, 10:02:05 PM
Plunt: OK
andrewsna: OK
794  Other / Beginners & Help / Re: Whitelist Requests (Want out of here?) on: February 15, 2013, 02:39:27 AM
pier: OK
bitcoinbeliever: OK
rdymac: OK
PhantomSpark: OK
austinzsoice: OK
matada: OK
MonadTran: OK
Arthur3000: OK
MJD: OK
toorik: OK
dakiller: OK
bsgmz: OK
Choadzilla: OK
bipolarbear187: OK
Superschupp: OK
longhornbits: OK
HighInBC: OK
caramelsun: OK
BackRoomBitcoin: OK
SammyBlackstar: OK
kryptoweb: OK
jchysk: OK
gdsl: OK
BitHub: OK
5flags: OK
MomBoyMiners: OK
bigmint: OK
795  Other / Meta / Re: Upcoming downtime on: February 13, 2013, 03:17:22 AM
502 Sad
796  Bitcoin / Important Announcements / Re: [ANN] Bitcoinica Consultancy abandons customers. Bitcoinica to enter Liquidation on: February 13, 2013, 03:14:56 AM
Quote
To All Investors
 
We have received a number of requests for updates on the liquidation of Bitcoinica LP (In Liquidation).
As noted in my email of 20th December 2012, we have requested details of the account records held by Mt Gox.  To date, Mt Gox has refused to release these details on account of confidentiality and has requested that the Liquidators provide various details of the accounts before it releases this information.  As reported in the Liquidators' first report, we do not have any company records whatsoever and are reliant upon the assistance of the limited partners and Mt Gox to take control of the bitcoins and cash assets.  We cannot make any progress in the liquidation unless Mt Gox provides us with access to Bitcoinica’s accounts.  We are currently awaiting information from the limited partners so that we can progress the account access issues with Mt Gox.
We apologise for the slow progress in the liquidation, but hopefully the investors will appreciate that we are making every effort to recover the bitcoins and funds in order to make a distribution to creditors as soon as possible.  We will continue to provide updates periodically to keep all investors abreast of the liquidation progress.
 
Kind Regards
Taslim Bhamji
Senior Insolvency Administrator
797  Bitcoin / Development & Technical Discussion / Re: The MAX_BLOCK_SIZE fork on: February 05, 2013, 11:22:02 PM
Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.
...let's lower that constant. ...
In summary, I propose that to avoid the tragedy of the commons problem, instead of limiting the available space, we limit the available time allowed for the block to propagate instead. Now THAT is a Bitcoin 2.0 (or rather, 1.0)

For the rest of us who are catching up, are you proposing what seems far more radical than eliminating the 1Mb limit?
Quite possibly. However, if we think of the 10-minute constant as not actually having to stay fixed, we can adjust it so that, at the time we disable the 1 MB limit, the largest block that miners would practically want to make would be 1 MB. Basically, this would protect us from jumping from a 1 MB limit one day to a practical 50 MB limit the next (or whatever is currently practical with the 10-minute constant). I mainly want people to remember that changing the block time is also something that can be on the table.

Can you please clarify. Are you proposing reducing the 10 min average block creation time?
Yes.

If so, what happens to the 25 BTC reward which would be excessive, and need a pro-rata reduction for increased block frequency?
Just like you said, it would have a pro-rata reduction for increased block frequency. Sorry, I assumed that was obvious, since changing anything about the total currency created is absolutely off the table.
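
For example (illustrative arithmetic only, not a concrete proposal): halving the block interval from 10 minutes to 5 would mean 12.5 BTC per block and a halving every 420,000 blocks, so the coins issued per day, and the 21 million total, stay exactly the same.

Code:
OLD_INTERVAL = 600      # seconds per block today
OLD_REWARD = 25         # BTC per block today
OLD_HALVING = 210_000   # blocks between reward halvings today

def scaled_schedule(new_interval):
    """Scale reward and halving interval so issuance per unit of time is unchanged."""
    ratio = new_interval / OLD_INTERVAL
    return OLD_REWARD * ratio, int(OLD_HALVING / ratio)

print(scaled_schedule(300))   # (12.5, 420000) for 5-minute blocks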

EDIT: Oh, and of course, there must be tolerance levels too (if I'm X blocks behind the chain I once rejected, I'll give up and start building on top of it). You don't want to create that many chain forks! Smiley
Absolutely. Of course, that sadly means that we won't be able to ever trust a block until it gets past that point (which I think should be 2-4 blocks).

Would it really?
Yes. You wouldn't be able to trust that a majority of the network acknowledged a block until it gets past the point where all clients are required to accept it as part of the chain.

Imagine that only 10% of the network accepts blocks over 10 MB and 100% accepts blocks less than 1 MB. What if that 10% got lucky and generated two 11 MB blocks in a row? Well, the other 90% would just ignore them because they are too large. So, those blocks get orphaned because the rest of the network found three small blocks. If you just accepted the 11 MB blocks as a confirmation and sent goods because of it, you could be screwed if there was a double-spend.
798  Bitcoin / Development & Technical Discussion / Re: The MAX_BLOCK_SIZE fork on: February 05, 2013, 06:46:12 AM
EDIT: Oh, and of course, there must be tolerance levels too (if I'm X blocks behind the chain I once rejected, I'll give up and start building on top of it). You don't want to create that many chain forks! Smiley
Absolutely. Of course, that sadly means that we won't be able to ever trust a block until it gets past that point (which I think should be 2-4 blocks). So, to mitigate the damage that will cause to the practical confirmation time...
Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.
...let's lower that constant. Additionally, by lowering the block-creation-time constant, you increase the chances of there being natural orphans by a much larger factor than you are lowering the constant (5 minute blocks would on average have 4x as many orphans as 10 minute blocks over the same time period). Currently, we see that as a bad thing since it makes the network weaker against an attacker. So, the current block time was set so that the block verification time network-wide would be mostly negligible. Let's make it so that it's not.

To miners, orphans are lost money, so instead of using a block time constant large enough that orphans rarely happen in the first place, force control of the orphan rate onto the miners. To avoid orphans, they'd then be forced to use such block-ignoring features. In turn, the smaller the block time constant we pick, the exponentially smaller the blocks would have to be. Currently, I suspect that a 50 MB block made up of pre-verified transactions would be no big deal for the current network. However, a 0.2 MB block on a 2.35-seconds-per-block network (yes, extreme example) absolutely would be a big deal (especially because at that speed even an empty block with just a coinbase is a problem).
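
To put rough numbers on that (using the same simple fixed-propagation-delay model as earlier; it ignores verification time, so treat it as illustrative only): orphans per unit time scale roughly as delay/interval^2, so halving the block interval roughly quadruples the orphans seen over the same period.

Code:
import math

def orphans_per_day(block_interval_s, propagation_delay_s):
    """Expected orphans per day in a simple fixed-propagation-delay model."""
    blocks_per_day = 86_400 / block_interval_s
    per_block = 1 - math.exp(-propagation_delay_s / block_interval_s)
    return blocks_per_day * per_block

print(orphans_per_day(600, 60))   # ~13.7 orphans/day at 10-minute blocks
print(orphans_per_day(300, 60))   # ~52.2 orphans/day at 5-minute blocks (~4x)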

There are also some side benefits: because miners would strongly avoid transactions most of the network hasn't seen, only high-fee transactions would be likely to make it into the very next block, but many transactions would make it eventually. It might even encourage high-speed relay networks to appear, which would require a cut of the transaction fees the miners make in order to let them join.

In summary, I propose that to avoid the tragedy of the commons problem, instead of limiting the available space, we limit the available time allowed for the block to propagate instead. Now THAT is a Bitcoin 2.0 (or rather, 1.0)
799  Bitcoin / Important Announcements / Re: World's First Bitcoin Lawsuit - Cartmell v. Bitcoinica on: February 05, 2013, 02:30:08 AM
UPDATE 25/Jan/2013:

Defendant Intersango escapes.  

Other Defendants' motion to dismiss for lack of personal jurisdiction is denied.  Case allowed to proceed against Bitcoinica, Patrick Strateman, Amir Taaki and Donald Norman in California.

Quote
"Moving Defendants have failed to show that New Zealand is a suitable forum."

"The language of the Terms and Conditions allegedly on the website, even if it is enforceable, states only that 'you [meaning a customer] agree to submit to settle any dispute,' not that customers could not file actions against Bitcoinica outside New Zealand."

"In addition, the fact of a liquidation proceeding against Bitcoinica in New Zealand is not sufficient grounds to stay this action."

https://docs.google.com/file/d/0B_ECG6JRZs-7SDBhU2ducWM5eEU/edit


800  Other / Meta / Re: Bryan Micon...CREEPY STALKER GUY on: February 05, 2013, 01:12:20 AM
I think it's pretty clear whose side of this debate the mods are taking:  the side of the company buying all the advertising here.  I can only imagine what else is going on between BFL and the mods here.
Yawn.
US MODS DON'T GET A SINGLE CENT FROM BFL, AND THE TROLLING INCURRED BY THEM IS A PITA.
Happy?
And yet he wonders why we don't consider any of his "investigative journalism" to be very credible. He simply makes things up. Micon, if you actually did real research everyone here would take you more seriously.

As for why some personal insults made by hardware makers aren't deleted, that's because we're often willing to keep those posts there as an example of why people shouldn't do business with them. We couldn't care less about your response.

Of course, when their insults are excessive, they've already proven their point that they should be avoided at all costs, so that's when we'll start deleting their posts. Even worse, when that happens they risk getting banned from this forum. If you're a business, you don't want to mess with the moderators here.