jl777 (OP)
Legendary
Offline
Activity: 1176
Merit: 1134
September 21, 2014, 01:09:27 AM
With less than half a day left, it seems quite unlikely that the 3% will sell today, so it is looking more and more like the end of Monday will be the end of TOKEN sales. The exact time will be posted when we know for sure.
James
kufan
September 21, 2014, 02:10:16 AM
With less than half a day left, it seems quite unlikely that the 3% will sell today, so it is looking more and more like the end of Monday will be the end of TOKEN sales. The exact time will be posted when we know for sure.
James

That is enough. It has been a long time.
m30188
Newbie
Offline
Activity: 40
Merit: 0
September 21, 2014, 03:46:37 AM
With less than half a day left, it seems quite unlikely that the 3% will sell today, so it is looking more and more like the end of Monday will be the end of TOKEN sales. The exact time will be posted when we know for sure.
James

That is enough. It has been a long time. Two weeks in crypto-world time is an eternity.
jl777 (OP)
Legendary
Offline
Activity: 1176
Merit: 1134
September 21, 2014, 04:11:21 AM
Here is part of a thread discussing the ongoing code review in the NXT forum:

Is there anybody available to code review the SuperNET routing code? It is currently all in a single file of 666 lines! Spooky. It is doing the onion routing and tokenizing, and hopefully routing properly without leaking any private info or exploding the packet traffic. James

James, I'd love to. Is it checked into a github? If so, which one?

https://github.com/jl777/libjl777 (packets.h and udp.h are the key files)

Warning: Technical information about the SuperNET

As suggested by jl777, I am posting the workings of onion routing and the potential pitfalls it could have. Onion routing is a method for sending data from one place to another by bouncing the data across multiple servers with multiple layers of encryption.

Process:
1. The client encrypts the data to send in multiple layers.
2. The client sends the encrypted packet to the first of many PrivacyServers.
3. The privacy server runs checks on the packet.
4. The server then decrypts the packet by one layer.
5. The server checks whether the packet is now completely decrypted and meant for this location.
6. If it still has layers of encryption to go, the server locates a new privacy server to send to and repeats the process.
7. Once the packet reaches the end user, the data is available, seemingly untraceably and anonymously.

Faults to work out:
- The packets could be monitored for size and followed by tracing a unique packet size through the network.
  A) Mitigated by either padding all packets to the maximum size or adding a random salt to the packet at each level.
- The packets could be followed by the timing of each hop.
  B) Mitigated by adding random wait times to the system.
- A code fault could somehow allow access to previous senders, or to public keys for other levels or privacy servers.
  C) Requires further code analysis for errors.

If anyone wants to help find flaws to iron out, your help is appreciated; otherwise this is just a rundown of what SuperNET can do.

A) I plan to have a higher privacy level that pads out all packets. Since this adds network bandwidth load, I think it is best to have a "cost", which will just be the requirement of a TBD amount of BTCD in the user's account.

B) This, I think, assumes the attacker is able to monitor all packet traffic, but the attacker would only be able to decrypt packets that it is routing (or receiving). Unless the traffic level is so low that individual packets can be traced globally, it will be hard to correlate the packets even for a single transmission. That being said, a random delay is a good idea, though probably not over such a large range.

It is important to note that there are two types of nodes: the public servers that publish their IP addresses and pubkeys, and the private nodes that only communicate with the public servers. The part where I most need help is seeing whether the probabilistic routing will truly shield a private node's IP address from the public servers, and of course whether it will be able to route successfully most of the time.

My feeling is that, based on network topology, different fanout levels need to be used, in addition to some integration of historical probabilities. When the network is relatively small, we can err on the side of larger values, but at larger scale it will be important to be as efficient as possible. One approach I am thinking of is to just use an adaptively adjusted fanout factor, e.g. shrink it if it worked, expand it if it didn't. This would end up jittering around the optimum level, and once there is enough history, add a bit of headroom to the critical value. It would be nice if somebody could model this.

James
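The seven-step process described in the post can be sketched as a toy layered cipher, including the fixed-size padding from mitigation A. Everything below is illustrative only: a real deployment would use per-hop public-key cryptography, and the function names, key values, header size, and server names are invented for the example, not taken from libjl777 (packets.h / udp.h).

```python
import hashlib
import os

MAX_PAYLOAD = 64  # mitigation A: every payload is padded to one fixed size

def _keystream(key: bytes, n: int) -> bytes:
    """Toy SHA-256-in-counter-mode keystream -- illustrative only, NOT secure."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def _xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def pad(payload: bytes) -> bytes:
    """2-byte length prefix plus random fill, so payload size can't be traced."""
    fill = os.urandom(MAX_PAYLOAD - 2 - len(payload))
    return len(payload).to_bytes(2, "big") + payload + fill

def unpad(padded: bytes) -> bytes:
    n = int.from_bytes(padded[:2], "big")
    return padded[2:2 + n]

def wrap(padded: bytes, route: list) -> bytes:
    """Client side (steps 1-2): innermost layer is applied first, so the
    first privacy server's layer ends up outermost. route is a list of
    (server_key, next_hop_name) pairs in forwarding order."""
    pkt = padded
    for key, next_hop in reversed(route):
        header = next_hop.encode().ljust(16, b"\x00")  # fixed-size next-hop field
        pkt = _xor(header + pkt, key)
    return pkt

def unwrap(pkt: bytes, key: bytes):
    """One privacy server's work (steps 3-6): strip exactly one layer,
    learning only the next hop, never the full route."""
    plain = _xor(pkt, key)
    return plain[:16].rstrip(b"\x00").decode(), plain[16:]
```

Note that each hop removes a 16-byte header, so packet size still shrinks per hop here; a production design would re-pad at every layer to keep the size constant on the wire.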
stereotype
Legendary
Offline
Activity: 1554
Merit: 1000
September 21, 2014, 08:09:29 AM
My feeling is that, based on network topology, different fanout levels need to be used, in addition to some integration of historical probabilities. When the network is relatively small, we can err on the side of larger values, but at larger scale it will be important to be as efficient as possible. One approach I am thinking of is to just use an adaptively adjusted fanout factor, e.g. shrink it if it worked, expand it if it didn't. This would end up jittering around the optimum level, and once there is enough history, add a bit of headroom to the critical value.
Errr, whatever you say James. Whatever you say!
jl777 (OP)
Legendary
Offline
Activity: 1176
Merit: 1134
September 21, 2014, 08:21:39 AM
My feeling is that, based on network topology, different fanout levels need to be used, in addition to some integration of historical probabilities. When the network is relatively small, we can err on the side of larger values, but at larger scale it will be important to be as efficient as possible. One approach I am thinking of is to just use an adaptively adjusted fanout factor, e.g. shrink it if it worked, expand it if it didn't. This would end up jittering around the optimum level, and once there is enough history, add a bit of headroom to the critical value.
Errr, whatever you say James. Whatever you say!

I was trying to describe something like a robot control circuit that uses feedback to track along a line, but of course in a different dimension.

James
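The feedback-controller idea can be sketched as a tiny loop: shrink the fanout after each success, expand it after each failure, and once history accumulates, run with a bit of headroom above the tracked minimum. All names and constants below are hypothetical, not from the SuperNET code.

```python
class AdaptiveFanout:
    """Feedback controller for the routing fanout: probe downward after a
    success, back off quickly after a failure, so the value jitters around
    the smallest fanout that still routes reliably."""

    def __init__(self, fanout: int = 8, lo: int = 2, hi: int = 64, headroom: int = 1):
        self.fanout = fanout
        self.lo, self.hi = lo, hi
        self.headroom = headroom  # safety margin on top of the tracked optimum
        self.history = []         # (fanout, succeeded) pairs for later modeling

    def record(self, succeeded: bool) -> int:
        """Feed back one routing attempt's outcome; return the fanout to use next."""
        self.history.append((self.fanout, succeeded))
        if succeeded:
            self.fanout = max(self.lo, self.fanout - 1)   # probe downward slowly
        else:
            self.fanout = min(self.hi, self.fanout * 2)   # back off quickly
        return self.next_fanout()

    def next_fanout(self) -> int:
        return min(self.hi, self.fanout + self.headroom)
```

The asymmetry (decrement on success, double on failure) is the same shape as additive-increase/multiplicative-decrease congestion control, which is one way somebody could start modeling it.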
Cassius
Legendary
Offline
Activity: 1764
Merit: 1031
September 21, 2014, 08:43:58 AM
What do you define as a large network? And what would efficient mean for a network that size?
jl777 (OP)
Legendary
Offline
Activity: 1176
Merit: 1134
September 21, 2014, 09:01:25 AM
What do you define as a large network? And what would efficient mean for a network that size?
100,000+ nodes. Efficient would mean close to optimal redundancy. I still need to work out what sort of probability distribution there will be, so I am not sure what "optimal" is yet.
Cassius
Legendary
Offline
Activity: 1764
Merit: 1031
September 21, 2014, 09:11:24 AM
What do you define as a large network? And what would efficient mean for a network that size?
100,000+ nodes. Efficient would mean close to optimal redundancy. I still need to work out what sort of probability distribution there will be, so I am not sure what "optimal" is yet.

How many public/private nodes?
jl777 (OP)
Legendary
Offline
Activity: 1176
Merit: 1134
September 21, 2014, 09:17:04 AM
What do you define as a large network? And what would efficient mean for a network that size?
100,000+ nodes. Efficient would mean close to optimal redundancy. I still need to work out what sort of probability distribution there will be, so I am not sure what "optimal" is yet.

How many public/private nodes?

This depends on the users. I am counting each computer as a node, and most will probably run a loopback privacy server alongside the private node, plus the 100+ public privacyservers, so that is about 1,000 clients per privacyserver. Sounds like a lot, but UDP processing will be fast, and network packets are so slow compared to the CPU that I don't think it will be an issue. RAM will probably be the limiting factor, but even if things have to swap to HDD it could be fast enough.
Porte
September 21, 2014, 10:49:14 AM
Long Live James!

Enough with that. You sound ridiculous.

I agree, please stop that. From reading James and getting a sense of his down-to-earth personality, I do not think he enjoys "NXTzy" slogans glorifying his persona. Thanks and regards, Rodrigo

Hypocrisy is easy to find in others but hard to see in oneself, and talking on behalf of others is one of those cases. None of us are perfect. The best we can do is to accept that we will find flaws in others and appreciate it when others find our own flaws. We should all work to eliminate those flaws as they are found, but not everyone is passionate about self-improvement. I strongly believe in protecting myself and in self-defense, but I no longer attack back when someone offends me. We function in duality whether we realize it or not. We are utterly different people to different people, and we don't even know it. A good friend taught me about "silence": do not retaliate immediately when people offend you; instead, patiently reflect and wait, write down your thoughts on what you want to say to the one who offended you, then later read what you wrote, and if you are content, let the person know what you put down. And my only reason to say "Long Live James" was affectionate.

Reminds me of https://en.wikipedia.org/wiki/Nonviolent_Communication

Nonviolent Communication (abbreviated NVC, also called Compassionate Communication or Collaborative Communication) is a communication process developed by Marshall Rosenberg beginning in the 1960s. It focuses on three aspects of communication: self-empathy (defined as a deep and compassionate awareness of one's own inner experience), empathy (defined as listening to another with deep compassion), and honest self-expression (defined as expressing oneself authentically in a way that is likely to inspire compassion in others).

NVC is based on the idea that all human beings have the capacity for compassion and only resort to violence or behavior that harms others when they don't recognize more effective strategies for meeting needs. Habits of thinking and speaking that lead to the use of violence (psychological and physical) are learned through culture. NVC theory supposes all human behavior stems from attempts to meet universal human needs and that these needs are never in conflict. Rather, conflict arises when strategies for meeting needs clash. NVC proposes that if people can identify their needs, the needs of others, and the feelings that surround these needs, harmony can be achieved.

While NVC is ostensibly taught as a process of communication designed to improve compassionate connection to others, it has also been interpreted as a spiritual practice, a set of values, a parenting technique, an educational method and a worldview.

PS: I don't like it when a tiny number of loud people (compared to the dozens or hundreds who don't say anything) are the reason for drastic reactions. Just ignore them as they can ignore you.

+1. Sad to see you leaving, CECVW. You were a funny guy. Your only mistake was to be different.

Anyway, moving on: tomorrow, when Supernetwork is launched, what will happen? I mean, should we expect a high peak in the value of Supernetwork? Will there be an announcement of any kind, marketing, new coins added, etc.?
jl777 (OP)
Legendary
Offline
Activity: 1176
Merit: 1134
September 21, 2014, 11:05:50 AM
Anyway, moving on: tomorrow, when Supernetwork is launched, what will happen? I mean, should we expect a high peak in the value of Supernetwork? Will there be an announcement of any kind, marketing, new coins added, etc.?
The funding will continue for 2 more days. After that there will be a voting day, which is also there to make sure everybody takes down their bids for TOKEN. Then the SuperNET asset will be issued and dividended out. If you have TOKEN at bter, the SuperNET asset will be called UNITY and will automatically replace the TOKEN in your account. In NXT you will end up with both the SuperNET asset and TOKEN, but after the dividend the TOKEN will have no value other than whatever collector value it ends up with. Then trading starts, and I can go back to spending 75%+ of my time coding to complete the tech. Maybe I will take a day off.

It is a mystery to me what will happen to the price. That I cannot control. We will just work hard to increase the value, and the price will follow the value.

James
PondSea
Legendary
Offline
Activity: 1428
Merit: 1000
September 21, 2014, 11:10:27 AM
James, for the guys that purchased on Bter, could we get the tokens on the NXT asset exchange as well? Collecting stuff is half the fun.
jl777 (OP)
Legendary
Offline
Activity: 1176
Merit: 1134
September 21, 2014, 11:11:38 AM
James, for the guys that purchased on Bter, could we get the tokens on the NXT asset exchange as well? Collecting stuff is half the fun.

You will have to wait for UNITY to appear in your bter account and then withdraw it to your NXT account. The SuperNET asset will arrive.
valarmg
September 21, 2014, 11:15:53 AM
The funding will continue for 2 more days.

Could you name the exact end time/day when the ICO finishes?
From Above
September 21, 2014, 12:33:11 PM
What's gonna happen after the ICO ends?
~CfA~
Cassius
Legendary
Offline
Activity: 1764
Merit: 1031
September 21, 2014, 01:28:49 PM
What do you define as a large network? And what would efficient mean for a network that size?
100,000+ nodes. Efficient would mean close to optimal redundancy. I still need to work out what sort of probability distribution there will be, so I am not sure what "optimal" is yet.

How many public/private nodes?

This depends on the users. I am counting each computer as a node, and most will probably run a loopback privacy server alongside the private node, plus the 100+ public privacyservers, so that is about 1,000 clients per privacyserver. Sounds like a lot, but UDP processing will be fast, and network packets are so slow compared to the CPU that I don't think it will be an issue. RAM will probably be the limiting factor, but even if things have to swap to HDD it could be fast enough.

I'm a little lost now. I was originally thinking there would be ~100,000 'ordinary' users, then a much smaller number of public privacyservers (100); a user sends a packet, which bounces around between the 100 privacyservers for a while (1-3 onion skins), then 'exits' the privacyserver subset of the network to another user. This may be an oversimplification, or just plain wrong, especially as I'm not sure what a loopback privacyserver is. Assuming I have understood properly, then although there are a LOT of users, there aren't many privacyservers in comparison: 1000:1. And the optimisation needs to occur only within those 100? I guess I was just wondering whether there was some kind of shortcut, as 100 isn't that big a number, but that depends on what acceptable traffic is and what sort of fanout is considered acceptable. Just trying to understand better. This is what happens when you ask 'anyone' for feedback.
jeezy
Legendary
Offline
Activity: 1237
Merit: 1010
September 21, 2014, 01:30:06 PM
What do you define as a large network? And what would efficient mean for a network that size?
100,000+ nodes. Efficient would mean close to optimal redundancy. I still need to work out what sort of probability distribution there will be, so I am not sure what "optimal" is yet.

Even Bitcoin only has ~7,300 reachable nodes. I doubt that we will see above 1,000, at least in the first year or before massive hype.
CRServers
Newbie
Offline
Activity: 49
Merit: 0
September 21, 2014, 02:52:47 PM
100,000+ nodes
Even Bitcoin only has ~7,300 reachable nodes. I doubt that we will see above 1,000, at least in the first year or before massive hype.

From my understanding, that is the optimal number, not the initial number of nodes.

Regards,