Author Topic: Considering someday building a computing grid of AVL merkle forest - SHA256 chips?  (Read 881 times)
BenRayfield (OP)
Sr. Member
Activity: 316
Merit: 250
March 10, 2015, 04:03:45 AM
#1

First, here's the prototype of the immutable data structure it's based on:

Immutable Sparse Wave Trees (WaveTree)
Realtime bigdata tool for bit strings up to 2^63 based on AVL forest

https://sourceforge.net/projects/wavetree version 0.2.0 is an 84 kB jar file containing its own source code.

Open source, GNU LGPL 2+

Realtime bigdata tool at the bit level based on an immutable AVL forest, which can run in memory or, in future versions, as a merkle forest like a blockchain. The main object is a sparse bit string (Bits) that efficiently scales up to 2^63 bits, normally compressed since the forest shares duplicated substrings. Bits objects support reading a bit, byte, short, int, or long (Java primitives) at any bit index in the 64-bit range. Example: instead of building a class to hold a header and then data, represent all of that as Bits, subranges of them, and ints for the sizes of its parts. There is room for expansion to other kinds of compression, since Bits is a Java interface. The main functions on Bits are substring, concat, the number of 0 or 1 bits, and the number of bits (size). All those operations can be done millions of times per second regardless of size because the AVL forest reuses existing branches recursively. There's a scalar Java package (originally for copy/pasting subranges of sounds) and a bit Java package, plus a sparse n-dimensional matrix.

AVL tree balancing avoids a deep and slow forest

Bits substring, concat, and counting 1 bits in any subrange or combination cost only log time and memory (millions of times per second on an average computer)

Versioning on the N-dimensional matrix object (Multidim), since it's only a view of a Bits object. I've tested this on 10000 images from the MNIST OCR data.

Scalar and Bit versions - originally scalar, for copy/pasting subranges of sound. The same operations work for bit strings.

Can store sounds that are years long since it's sparse. The same works for bit strings up to 2^63.
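
Here is a minimal sketch, in Java, of the kind of immutable AVL-forest bit string described above. The class and method names (Bits, Leaf, Branch, concat, substring, ones) are made up for illustration and are not the actual WaveTree 0.2.0 API; the point is that every node caches its size and 1-count, and concat/substring build new parents while reusing existing children, which is why those operations stay around log time.

Code:
public abstract class Bits {
    public abstract long size();    // number of bits, up to 2^63-1
    public abstract long ones();    // number of 1 bits
    public abstract int height();   // AVL forest height
    public abstract Bits substring(long from, long to);   // bits in [from, to)

    public Bits concat(Bits right) {
        // A real implementation would rebalance here when heights differ by more than 1.
        return new Branch(this, right);
    }

    // Leaf: up to 64 literal bits packed in a long.
    static final class Leaf extends Bits {
        final long bits; final int len;
        Leaf(long bits, int len) { this.bits = bits; this.len = len; }
        public long size()  { return len; }
        public long ones()  { return len == 0 ? 0 : Long.bitCount(bits << (64 - len) >>> (64 - len)); }
        public int height() { return 0; }
        public Bits substring(long from, long to) { return new Leaf(bits >>> from, (int) (to - from)); }
    }

    // Branch: reuses both children unchanged and caches their combined totals.
    static final class Branch extends Bits {
        final Bits left, right;
        final long size, ones; final int height;
        Branch(Bits left, Bits right) {
            this.left = left; this.right = right;
            this.size = left.size() + right.size();
            this.ones = left.ones() + right.ones();
            this.height = 1 + Math.max(left.height(), right.height());
        }
        public long size()  { return size; }
        public long ones()  { return ones; }
        public int height() { return height; }
        public Bits substring(long from, long to) {
            long leftSize = left.size();
            if (to <= leftSize) return left.substring(from, to);
            if (from >= leftSize) return right.substring(from - leftSize, to - leftSize);
            return left.substring(from, leftSize).concat(right.substring(0, to - leftSize));
        }
    }
}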

-----

I too often get lost in excessive abstraction, but I'm getting back to keeping it real. This is one of my tools that I only came to understand the need for after years of research. It will be at the core of my game, AI, and science network, along with my new kind of mindmap and statistical tools like Boltzmann machines and Bayesian networks. All those things will be represented using this foundation of bits.

-------------

The possible computing grid:

These computers would not be like von Neumann motherboards. They would not support mutable memory locations. They're more like Lisp machines, except just a data layer, similar to BitTorrent except more direct and built for streaming at near lightspeed (there are already many optical wires, and wireless mesh networks could also be a transport layer).

A memory address, as I've thought of it so far, is:
256 bits of SHA256, concat 64 bits of length (in bits), concat 64 bits for the number of 1s, concat 8 bits of AVL forest height, concat 2 time headers.

The time headers are on some arbitrary scale and are probably 64 bits each (or maybe 60 if you want the total to be 512?):
* minimum delete time
* maximum delete time
So every piece of data in the network is, by agreement of those in the network, required to be deleted by a certain time (in microseconds or nanoseconds, maybe) and is required to be near-instantly available, until the minimum delete time, from whoever claims it exists (by referencing it in a higher AVL forest node they sent). For anyone who fails to put in a reasonable effort to do what they agreed to (by rebroadcasting, or first broadcasting, a leaf), I recommend to those who think it's a good strategy for making the network work overall: disconnect from them once you personally discover it, but don't tell anyone else in the network, because then you'd have to consider incoming claims from many others that various nodes did similar bad things. It's much simpler, in the game theory of it, to accept connections until you have personally seen evidence of a node not giving you what it agreed to (any part of any avlBitstring it sent you a merkle branch of). A balance should evolve of which data nodes will accept, based on whether it has unreasonable min or max delete times, because if you accept it and rebroadcast, you are responsible for routing it until the min delete time and for not routing it after the max delete time. You can always hash it again with a different time, so competition should form.

All addresses are the hash of 2 child addresses, together with the length, number of 1s, forest height, and min and max time headers for this new branch.
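
As a sketch of that addressing (field names and exact widths are assumptions for illustration, since the layout above is still undecided): a parent address is the SHA-256 of the two child addresses, plus the summed length, summed 1-count, incremented forest height, and the min/max delete times chosen for the new branch. Java's standard MessageDigest API does the hashing here.

Code:
import java.nio.ByteBuffer;
import java.security.MessageDigest;

public final class AvlAddress {
    final byte[] hash;        // 32 bytes: SHA-256 over both child addresses
    final long sizeBits;      // total bits below this node
    final long oneBits;       // total 1 bits below this node
    final byte height;        // AVL forest height
    final long minDeleteTime; // earliest time the data may be dropped
    final long maxDeleteTime; // latest time the data must be dropped

    AvlAddress(byte[] hash, long sizeBits, long oneBits, byte height,
               long minDeleteTime, long maxDeleteTime) {
        this.hash = hash; this.sizeBits = sizeBits; this.oneBits = oneBits;
        this.height = height; this.minDeleteTime = minDeleteTime; this.maxDeleteTime = maxDeleteTime;
    }

    // Serialize the whole address: 256-bit hash, 64-bit length, 64-bit 1-count,
    // 8-bit height, and two 64-bit delete-time headers.
    byte[] toBytes() {
        ByteBuffer buf = ByteBuffer.allocate(32 + 8 + 8 + 1 + 8 + 8);
        buf.put(hash).putLong(sizeBits).putLong(oneBits).put(height)
           .putLong(minDeleteTime).putLong(maxDeleteTime);
        return buf.array();
    }

    // Parent address = SHA-256 of the two child addresses, with combined headers.
    static AvlAddress parent(AvlAddress left, AvlAddress right,
                             long minDeleteTime, long maxDeleteTime) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        sha.update(left.toBytes());
        sha.update(right.toBytes());
        return new AvlAddress(
            sha.digest(),
            left.sizeBits + right.sizeBits,
            left.oneBits + right.oneBits,
            (byte) (1 + Math.max(left.height, right.height)),
            minDeleteTime, maxDeleteTime);
    }
}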

You can't hack a memory space that never supports modification, for any reason, unless SHA256 is cracked, and in that case you shouldn't have chosen that secure hash function. It makes no sense to create or delete a number, but if you ask for a mapped value (the 2 children and times of a parent) for which you have never received data claiming it exists (or not within the claimed time range), it is no error to not get its value. You can always ask, since others need not track which forest nodes you have, and in that case you should be ignored with no penalty at first and a gradually increasing penalty (a chance of being disconnected) with each repeated request for what was not offered (like uninvited port sniffing or non-implied URL guessing). If it is somehow implied that a piece of data exists and is within its time range, then go ahead and ask for it without the wasted formality of someone hashing it into something you don't want, but this is not the main function of the network.

Such computers could be any ordinary computer running SHA256 through a Java API, which in some implementations may call out to a GPU or do it on the CPU. Or, what I'm considering, a new kind of computer entirely which has many SHA256 chips, maybe redesigned as layers of NAND gates (since it's a constant number of steps), so each clock cycle can start a new SHA256 instead of waiting a few hundred cycles for it to finish hashing 2 addresses, which happens in most clock cycles.

Hash whenever 2 avlBitstrings are concatenated or a substring is taken.

Read recursively into the hashes when you want to count the number of 1 bits in any range (in log base 2 time, not the 2^64 cycles it would take with mutable memory, so a huge optimization), or concat log base 2 times during substring.
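
A sketch of that recursive count, written against the hypothetical Bits/Branch/Leaf classes above (same package assumed): a node whose whole range is inside the query returns its cached 1-count, a node with no overlap returns 0, and only the nodes straddling the range boundaries are descended, so the work is proportional to tree height.

Code:
// Count 1 bits in [from, to), descending only where the range partially overlaps.
static long onesInRange(Bits node, long from, long to) {
    if (from <= 0 && to >= node.size()) return node.ones();  // fully covered: cached total
    if (to <= 0 || from >= node.size()) return 0;             // no overlap
    if (node instanceof Bits.Branch) {
        Bits.Branch b = (Bits.Branch) node;
        long leftSize = b.left.size();
        return onesInRange(b.left, from, to)
             + onesInRange(b.right, from - leftSize, to - leftSize);
    }
    Bits.Leaf leaf = (Bits.Leaf) node;                         // clip inside a single leaf
    long count = 0;
    for (long i = Math.max(0, from); i < Math.min(leaf.size(), to); i++) {
        if (((leaf.bits >>> i) & 1L) != 0) count++;
    }
    return count;
}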

These computers would need extreme cooling. How is the nuclear cooling tech progressing? Not to rotate any mass-energy (in the data layer, though that's one of the things computers are for organizing) but to stop it from melting. For example, I have a theory that the thermodynamic saddle point of water, a few degrees above freezing where it gets bigger whether it cools or heats, might make it possible to separate out the heavier parts (the heavy water, with 1 proton and 1 neutron per hydrogen, attached to oxygen). But I hope I don't have to go to such extremes. Some computing grids are embedded in metal walls in buildings on icebergs.

This computer would, aside from its large address size, support standard 64-bit integer and floating point math, but pointers work differently. You don't write to a pointer. You substring and concat immutable avlBitstrings so that one includes a pointer to another. This is why the bitstrings can be up to 2^63 bits and all ops on them take log base 2 time and memory. I'm not sure if some ops do that log base 2 times, depending on a large difference in tree height, but that can at least be optimized if it becomes an issue.
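
For example, a "write" into the middle of an avlBitstring is just substring and concat around the unchanged original (again using the hypothetical Bits sketch above); the old version still exists and almost every node is shared between the two.

Code:
// Replace bits [from, to) of an immutable avlBitstring without mutating anything.
// Both the original and the result remain valid and share most of their nodes.
static Bits replaceRange(Bits original, long from, long to, Bits replacement) {
    Bits prefix = original.substring(0, from);               // O(log n), shares nodes
    Bits suffix = original.substring(to, original.size());   // O(log n), shares nodes
    return prefix.concat(replacement).concat(suffix);
}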

Think of it like anonymous Subversion for every piece of data and state change anywhere in the global network: when you want to write a petabit avlBitstring, it's potentially everywhere on Earth 1/23 of a second later, available but not cached until it starts being requested.

Prediction of what should be routed where is a layer on top of this, but in general routing should go by physical distance first, since light doesn't make exceptions for trips to the internet backbone and back. Would you go to town and back to reach your refrigerator? You would if IPv4 doesn't give you a direct path there.

Target audiences: Internet2.edu superfast scientific grid consumers; video streaming, which is currently done with patches on top of patches through BitTorrent; high bandwidth massively multiplayer VR; static caching layer optimization; competition for Cassandra, Hadoop, Bigtable, etc; supercollider data streams (you wouldn't believe how much data per nanosecond they generate, since physics waits for no one); global voxel streams, like those planned for live sports events so you can see from any angle even while pausing what was a live stream; memory mapping of neuromorphic, statistical, and wave based systems. Realtime high precision brain scans for networking together the minds of those who choose it (which I can't wait for, and no, I'm not the Borg; open source is by choice and therefore keeps the competition fair)... but if someone like the Borg collective wants to join the network, I think they'll find it just barely compatible enough, and annoying at having to hash when they would prefer more direct access to waves and statistics. Boltzmann machines are an important use case.

It's ridiculous that a database would only let you make 1000 or so queries per second. Too much sync when everyone has to agree on the contents of that variable we call the database. If you want to agree with many others on the value of some bitstring as a name, that's what we have blockchains, Namecoin, and domain names for. If you want to download a pointer to a petabit and reliably count how many 1 bits are in a randomly chosen substring of it (which may even be over half its size) in a small fraction of a second, then you would use such an avlBitstring computing grid instead of a name system. It's so much simpler to push all the data by value than to deal with the game theory of who agrees on name-value pairs.

What do you all think about the difficulty and volume pricing of such chips and this new kind of computing grid, if I were to pursue it at the hardware level later? Some of you are specialists in SHA256 chips, but I'd want chips that take the whole concat of 2 children and time headers (so even bigger than a single hash block and padding). And then there's many-way addressing to get the 2 children of any address, or to find it's not in the cache, and by cache I mean main memory. The difference between a fast cache, slower caches, and the Internet is just the number of hops. It makes no difference if it's on the same motherboard or across the Internet. It's all 1 big computer to me, but some people like to say they're different things so they can regulate general computing without calling it a computer. I'm not trying to break laws, but keep that slow mess away from me.

BenRayfield (OP)
Sr. Member
Activity: 316
Merit: 250
March 10, 2015, 07:12:02 AM
#2

Human brain scale NP-complete cellular automata 3D chip design - would this work?

NAND memory, they say, is only for storage, but NAND logic is universal if run in a loop, and would make a great FPGA (field programmable gate array) design. This design has no overlapping wires since they all span a maximum distance of 1 square vertically and 1 square horizontally.
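
As a reminder of why NAND alone is enough (a trivial sketch, not tied to any particular chip): every other boolean gate can be built from NAND, so a grid of nothing but NAND layers, looped, can in principle compute whatever a CPU or GPU computes, given enough layers and enough gates per layer.

Code:
final class NandGates {
    static boolean nand(boolean a, boolean b) { return !(a && b); }

    // Everything below is composed only of nand().
    static boolean not(boolean a)            { return nand(a, a); }
    static boolean and(boolean a, boolean b) { return not(nand(a, b)); }
    static boolean or(boolean a, boolean b)  { return nand(not(a), not(b)); }
    static boolean xor(boolean a, boolean b) {
        boolean m = nand(a, b);
        return nand(nand(a, m), nand(b, m));
    }
}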

I'm not sure how to turn these many small connections on and off, but I guess that's why FPGAs change their wiring shape so slowly (only thousands of times per second instead of billions like clock ticks). I'll do the same: rows and columns of printed wire at every stacked 2D layer, and leave some thin ventilation space between each 2 layers so the chip itself is a heatsink.

It alternates kinds of layers. One kind moves a bit 1 square up and to any of 4 sides, but only 0, 1, or 2 of them, and the other wires must be physically disconnected to avoid leaking power. The other kind of layer is NAND gates, 1 at every square in the 2D grid except the last row, which is copied as-is. Each NAND gate connects downward to 2 squares, 1 directly below and 1 to the side, always in the same direction. The bits must come to the NANDs so the NANDs never have to be rewired, because at 2 wires each that is nearly twice as many wires as there are squares. As a cellular automaton, the bits will get there by moving over 1 square at a time, and compilers should optimize for that shape, NANDing closer together when possible. Maybe there should be more bit-move layers between the NAND layers.
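
A minimal simulation sketch of one NAND layer as described here, with the side neighbor assumed to be one square to the right and edge squares copied as-is (both assumptions for illustration):

Code:
// One NAND layer step: every cell NANDs the bit directly below it with its side
// neighbor (here: one column to the right); edge cells with no neighbor are copied.
static boolean[][] nandLayer(boolean[][] below) {
    int rows = below.length, cols = below[0].length;
    boolean[][] out = new boolean[rows][cols];
    for (int r = 0; r < rows; r++) {
        for (int c = 0; c < cols; c++) {
            out[r][c] = (c + 1 < cols)
                ? !(below[r][c] && below[r][c + 1])   // NAND of below + side neighbor
                : below[r][c];                        // edge cell copied unchanged
        }
    }
    return out;
}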

Like normal stream chips (most commonly GPUs), many layers can be used at once, since each depends only on the one before it.

How small are integrated NAND circuits and tiny switches? You can get 32 gigabytes of flash memory for 20-50 dollars. That's 256 billion bits stored, plus the paths to them. If we could compute with just a tiny fraction of that, it would be a computer parallel enough to simulate a Human brain (maybe 100 of them) for just a few thousand dollars, going by the same price. But I don't think it would be that easy, since heat is often a problem. Spread them out more, leave more ventilation room. But still it's lots of electricity, like nearly a solid block of lightning unless it's very spread out. Could it work in flowing water?

Every GPU and CPU can run on a loop of NANDs if it has enough layers and enough NANDs per 2D layer, since all the info has to exist in a single layer; you can't go back to get what was missed.

BenRayfield (OP)
Sr. Member
Activity: 316
Merit: 250
March 10, 2015, 08:01:05 AM
#3

The integrated circuit has to go, in 3D chips. It's the cause of extreme heat in a small area. Its nature is to pack things in tight. We need to do the opposite: spread them far apart with tiny wires and run them slower. It will cost far more electricity, but it has the more important advantage of being the only kind that doesn't melt.

Think Lego blocks. Each block would be maybe the size of a speck of dust or a grain of sand. Most of that space is empty, for coolant to flow through, either air or liquid, something nonconductive.

Each block has 5 tiny poles sticking down, and 4 of them are also 1 square over. These are curved so they can cross each other without touching. Each curved pole is either a nonconductor or a conductor, depending on whether you want that wire to exist.

Of the copy layers, there are at most 16 ways that 4 choices of 2 things can be set (2^4), counting the 4 possible angles even though they are duplicates, and counting those which have more than 2 conductors, which I choose not to use for electrical reasons.

In the NAND layers, there is only 1 kind of block: conductor straight down and sideways down, with the other 3 as nonconductors.

Always 5 poles, so they lie on each other flat and precise in the grooves the poles fall into.

Gravity holds this stream chip together. It's not integrated. It's many blocks lying on each other.

These could be dropped from something like a 3D printer which has a large bag of each kind: they fall through a hole that they can only fit through while upside down and have to be slightly pushed while they hang by the tips of their 4 outer poles, then an arm grabs and flips each one over and moves it to lay it on the chip being printed. When I say printed, I mean laying things on each other with no chemistry. Easier than Legos, since you have to snap those together. These literally just lie on top of each other.

With thousands of those tiny arms laying blocks 1 layer at a time, each arm doing a certain rectangle in the top layer in progress, a few bags of extremely cheap and simple blocks form into the compute part of a Human brain scale stream processor, as evolvable hardware. It is laid on top of a slower motherboard layer which has many RAM chips, each of which can only be read and written by a certain section of the bottom of the 3D chip. If a process wants to access another section's memory, it has to go around the 3D chip again to move the logic to it, use the memory, then go around again to come back. Going around the 3D chip should be similar in speed to ordinary CPUs and GPUs, many millions or billions of cycles per second, plus multiple layers in use at once.

You could store a whole Human brain and run it in realtime on this, if we knew more about how brains work. We're getting there. This would solve the speed and size problems. Or it may turn out it takes far less compute and memory to simulate a Human brain, or that there are many better kinds of minds, but still we have 7 billion Human brains whose memories, and ways of thinking using those memories, we don't want to lose. Backups would be good, if we knew how to scan. You don't have to turn it on. Eventually doctors would figure out how to give you back parts of your mind that may be lost to old age or injury. Or if you're like me: I don't care what material or dimensions I'm made of, since it's all the field. I care what thoughts happen in my mind, and computers are a great tool to expand minds as long as you don't let them expand their mind by copying over yours. Direction of copy is important.

But the large chips and excessive power needs do come at a price. Can I rent a fusion reactor to run this chip? Maybe it can be made small enough while still being mostly empty space.

BenRayfield (OP)
Sr. Member
Activity: 316
Merit: 250
March 10, 2015, 08:59:15 AM
#4

Never mind the 5 poles sticking down from each block. I'll go with corner alignment and make them all cubes with different kinds of surface depending on where electricity should flow, and slightly rounded edges so the faces are more likely to push on each other if a cube isn't perfect. The bottom side has 4 smaller squares which are each a nonconductor or a conductor. And let's get rid of the copy layers. Everything is a NAND, and it must choose any 2 of those 4. The top of each block is a conductor for other blocks to lay their smaller conductors on.

The chip is now square-pyramid shaped with the top cut off, since all the data has to pass through each individual layer all at once, including the top layer. Then somehow the top has to connect to the center of the bottom (wasting the NANDs hanging off the edge, but they're just there to lay the others on). I'd like to hollow out the pyramid so it can wire that through its center, but then how can the blocks lie on the inner edge? If it was a flat-donut pyramid shape instead of a square pyramid, the sides of the blocks wouldn't touch, but there are no conductors on the sides, so that's OK. Flat-donut pyramid it is.

Then, or I should say before starting, lay another kind of block that acts as pieces of wire and has conductors on some of its sides, tops, and/or bottoms. Cubes are easier to assemble than wires. Assemble using a machine that has a slot for each possible kind of cube and alignment. Now I imagine it more like a tiny, very basic assembly line without robot arms, just poles pushing straight then sliding back, to push the blocks to where they need to fall through a cube-shaped hole just a little bigger than the cubes, so they have to fall slowly and not far down.

Add another block of layers between the wires wrapping around and the lowest NAND layer, to read and write a specific standard memory stick or other device. Use enough layers that it has time in the worst case.

The only thing missing is how to power the blocks. The sides aren't being used. That's where it has to go, since I don't want to drain energy by using a moving wave across them, but that is something to explore later. Wrap a power layer around the cylinder, with some way to choose which layers are powered at each moment and how much. Wrap some rubber bands around it (or a flat rubber sheet) to make sure the sides touch radially, and it's possible to build a 3D NAND stream chip out of a few kinds of cubes with no chemistry between the cubes. Phonebloks (which Google started leading) might be good attachments to these far smaller cubes.

cbeast
Donator
Legendary
Activity: 1736
Merit: 1006

Let's talk governance, lipstick, and pigs.

March 10, 2015, 09:09:51 AM
#5

Quote from: BenRayfield
I too often get lost in excessive abstraction, but I'm getting back to keeping it real.
If I tried this, would I end up floating in a wormhole?  Grin

Any significantly advanced cryptocurrency is indistinguishable from Ponzi Tulips.