Author Topic: Do you think "iamnotback" really has the" Bitcoin killer"?  (Read 79918 times)
iamnotback
Sr. Member
Activity: 336
Merit: 265
April 05, 2017, 11:47:02 AM
 #721

Quote from: IadixDev
The tree can be serialized both to a binary format and to text JSON.

The node has both a binary p2p protocol and a JSON RPC interface, working on the same objects.

Okay, so your APIs support either binary compatibility or JSON.

Quote from: IadixDev
The type definition will always escape the C compiler's comprehension, but a typedef alias on the reference pointer can help.

Why can't you translate these JSON objects to binary data structures in C? Which data structures can't a C struct model?

It would be much more elegant, and statically typed, to access the fields of data structures with normal dot notation instead of boilerplate library calls.

Quote from: IadixDev
The high-level articulation won't be done in C. It can be distributed in pure binary form for Linux and Windows; a module with a new RPC interface can be added to the node without recompiling.

The part of the API relevant to app programmers is not there, but in the JavaScript code, via the RPC/JSON API.

Articulation is a strange word to use there. I think what you mean to say is that the high-level coding will be done in JavaScript, and calls can be made to C modules which can be loaded at run-time.
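For illustration (a sketch, nobody's actual API): a generic JSON value can be modeled in C with a tagged union, though field access then goes through a runtime tag check rather than the plain dot notation a concrete struct would give:

Code:

#include <stddef.h>

typedef enum { JSON_NULL, JSON_BOOL, JSON_NUM, JSON_STR, JSON_ARR, JSON_OBJ } json_type;

typedef struct json_value json_value;
typedef struct { char *key; json_value *val; } json_member;

struct json_value
{
    json_type type;                 /* runtime tag, checked before access */
    union {
        int          boolean;
        double       number;
        char        *string;
        struct { json_value **items;   size_t len; } array;
        struct { json_member *members; size_t len; } object;
    } u;
};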
IadixDev
Full Member
Activity: 322
Merit: 151
They're tactical
April 05, 2017, 12:01:12 PM
 #722

Quote from: iamnotback on April 05, 2017, 11:47:02 AM


There is a way to pack a binary structure into a single variable in binary form; I do this for the DER signature.

When there is a complex hierarchy and the types need to be kept across the layers that transport it (e.g. event loops dealing with events whose members have different types), it is useful to have meta-typed objects like this.

They can safely transport a complex data structure through a generic reference-pointer type, and the object can be serialized to binary data and hashed, or to textual JSON for the RPC/JSON and JS APIs.


Yes, basically the idea is to have binary plugin modules implement the RPC interface: the RPC method is the exported symbol name, the class is the module name, and params are passed with the tree structure (a typedef alias could be added for a basic C compiler check).
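A sketch of that dispatch under POSIX assumptions (tree_node is a hypothetical stand-in for the object tree type, not the project's actual API):

Code:

#include <dlfcn.h>
#include <stdio.h>

typedef struct tree_node tree_node;            /* opaque generic reference  */
typedef int (*rpc_method)(tree_node *params, tree_node *result);

int rpc_call(const char *module, const char *method,
             tree_node *params, tree_node *result)
{
    void *lib = dlopen(module, RTLD_NOW);      /* class == module name      */
    if (!lib) { fprintf(stderr, "%s\n", dlerror()); return -1; }

    rpc_method fn = (rpc_method)dlsym(lib, method); /* method == symbol name */
    if (!fn) { dlclose(lib); return -1; }

    return fn(params, result);                 /* params passed as the tree */
}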

IadixDev
Full Member
Activity: 322
Merit: 151
They're tactical
April 05, 2017, 12:13:07 PM
Last edit: April 05, 2017, 12:54:21 PM by IadixDev
 #723

The C compiler can only understand the "leaves" of the object: the vars that can be represented by a native C type (int, float, strings, pointers, arrays, etc.). For the object tree it doesn't understand the type; to the C compiler, every object is a reference pointer.

Internally there is static typing for objects with built-in types.

Or an "object template", close to a typeclass, can be used to instantiate an object from a compile-time structure, using those built-in types as the static type definition in a JSON-like format.

The C compiler won't understand anything beyond its basic native types in the leaves of the objects.

Objects can have a static type associated with their reference, and a name.

That is the advantage over lists of C structures: members can be referenced by name and type based on runtime input, close to functional programming.

And structures can be manipulated by the same functions whether they are deserialized from binary data, constructed from JSON objects, or constructed manually in C from native compiler types.

But different compilers can have different binary packing for struct members.

In principle nothing prevents using C structs, but they need specific glue code on both sides to deserialize them (compilers can pack structure members in odd ways) and turn them into JSON.

With this you get instantiation from a typeclass-like format, automatic serialization to binary data and conversion to text JSON, reference counting, arrays, lists, collections, thread safety, dynamic linking at the binary level, and interfaces implemented in cross-compiler/cross-system executable binaries, without glue code specific to each structure.

Static object types can be defined from built-in structures/objects or native C compiler types, and arrays/lists of those.
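The typedef-alias trick looks like this (a sketch with illustrative names): each exported handle gets a distinct incomplete struct type, so the C compiler can at least catch handles being mixed up, even though internally everything is the same generic reference:

Code:

typedef struct tx_obj    *tx_ref;      /* struct tx_obj is never defined...  */
typedef struct block_obj *block_ref;   /* ...so these are distinct types     */

int verify_tx(tx_ref tx);              /* passing a block_ref here is a
                                          compile-time error                 */

/* internally the framework casts handles to its generic node type */
typedef struct tree_node tree_node;
static tree_node *as_node(void *ref) { return (tree_node *)ref; }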


IadixDev
Full Member
Activity: 322
Merit: 151
They're tactical
April 05, 2017, 01:47:43 PM
Last edit: April 05, 2017, 02:07:58 PM by IadixDev
 #724

From your discussion on the Git gist:

https://gist.github.com/shelby3/19ecb56e16f159096d9c178a4b4cd8fd

Quote
One example is that C can't stop you from accidentally reading from uninitialized memory, or writing and reading past the end of an allocated memory block, thus opening your code to innumerable security holes. You'll never know if you've caught all these bugs, which become zero-day exploits, because all your code is written with grains-of-sand detail. Rather, low-level code should only be used for the 5 - 20% of the code that needs to be finely tuned by an expert. High-level programming languages result in much faster programs most of the time in real-world scenarios (not benchmark contests). High-level programs are less verbose, provide more consistency of structure and performance, and enable less opaque expression of semantics (i.e. the program's logic and purpose). C obscures the actual semantics in loads of unnecessary low-level details, such as manually managing pointers, and the inability to express and abstract over high-level data structures such as collections and functional composition.

Quote
Not a fault, but rather reading the wrong data injected by the hacker.

The hacker is injecting some data into memory that you read by accident, because he observes that one of your arrays sometimes precedes a certain string that you store in memory. For example, you receive some input string from the Internet and store it on the heap; then, with a certain random probability, that string ends up being read due to a bug in your code that reads from an array on the heap. In this way, the hacker has injected some data where you did not expect it. This is one way zero-day exploits are made.


That's why it's better to use the object tree structure than C structures directly.

And C structures can be dicey with cross-compiling and binary network protocols.

Creating a JSON string from an object tree structure is not so trivial in C either (stdio, varargs, strcpy, strtod, etc.). Even with C++ and Boost Spirit, it's far from limpid.

The object tree system avoids bad memory accesses like this, among other things.

Even when the caller is blind to the data that the function uses (to the C compiler it's a reference pointer).


But again, nothing prevents declaring and using C structures; I just avoid them as much as possible in the API.

The only exception in the non-kernel code, I think, is the string structure, but it has 3 members and is very generic.

Otherwise I avoid using binary C structures as arguments in function calls that need to be cross-compiler/network safe.

Strings are packed internally with the bitcore variable-string format for easy serialization to the binary p2p protocol.

Like this:
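A sketch of that serialization, assuming the standard Bitcoin CompactSize prefix (my reading of the "bitcore variable string format"; a little-endian host is assumed, as the protocol itself is little-endian):

Code:

#include <stdint.h>
#include <string.h>

/* CompactSize length prefix */
size_t write_varint(uint8_t *out, uint64_t n)
{
    if (n < 0xFD)        { out[0] = (uint8_t)n; return 1; }
    if (n <= 0xFFFF)     { out[0] = 0xFD; memcpy(out + 1, &n, 2); return 3; }
    if (n <= 0xFFFFFFFF) { out[0] = 0xFE; memcpy(out + 1, &n, 4); return 5; }
    out[0] = 0xFF; memcpy(out + 1, &n, 8); return 9;
}

/* var_str: CompactSize length, then the raw bytes */
size_t write_varstr(uint8_t *out, const char *s, size_t len)
{
    size_t off = write_varint(out, (uint64_t)len);
    memcpy(out + off, s, len);
    return off + len;
}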

Say bye to zero-day exploits, bad memory accesses, etc.

Say hi to cross-compiler binary compatibility, cross-language interfaces, safe and secure distributed applications, etc.


I'm sure you see where I'm going with this Cheesy Cheesy

IadixDev
Full Member
Activity: 322
Merit: 151
They're tactical
April 05, 2017, 02:35:23 PM
 #725

But I think you really overestimate the relevance of compile-time type checking in the context of cross-language interfaces and distributed applications.

Even in the best case, with a 100% homogeneous language for the whole app, all modules, and every layer down to the kernel, the compiler can only check its own side of the application.

Nothing says all the parts will be compiled with the same compiler, or written in the same language at all.

And it seems impossible to cover everything, from the binary data protocol and crypto, through cross-language interface definitions, up to high-level OO programming for distributed applications, in a single high-level language.

For me it's impossible.

You need to make a compromise somewhere.

The compromise is that the glue code between the low and high levels is full of boilerplate, but I don't see a better way to do it.

It remains usable on both ends, from low-level/kernel/binary data to high-level OO, network protocol, and RPC, and the data representation stays consistent across all parts.

It should be easy to program simple OO/GC-like languages, or high-level definitions of binary network protocols, with a syntax similar to JS, or almost compatible with a JS engine, but with other built-in objects made available through module API/interface calls.

The module abstraction replaces the JavaScript DOM object definitions, and it is a much simpler interface.

Or it's possible to simulate the interface of the browser's http object, to have more browser-JS-compatible code for distributed requests.

IadixDev
Full Member
Activity: 322
Merit: 151
They're tactical
April 06, 2017, 09:11:13 AM
 #726

The core holistic concept I originally wanted to get at was "loopless programs". Maybe then you will understand why, for me, compile-time checking is mostly irrelevant.


Every programmer treats the Turing model as some kind of holy bible, but in truth it's mostly useful in the paradigm of programs that do scientific computation.

The Turing model is for programs that start with initial parameters, run their loop, and finish with the computed result.

Once you throw in IRQs (USB events, key/mouse input, network events, hard-drive events) or threads, it's not a Turing machine anymore: state can change outside the influence of the program.

And most applications today do not follow this model at all.

Most applications, whether servers or UIs, have a mostly empty main loop but are programmed with handlers for certain events; for UI apps, the main loop mostly pumps and dispatches events from different sources. In web apps there is no 'main loop' at all (it's in the browser or the JS engine).

In any case, the whole code path is never determined at compile time.

There is no predefined code path the compiler can follow.

The whole program is made as a collection of classes and modules that handle different kinds of events, whether they come from the hardware or from another thread.

Most of the program flow is not functions calling each other in a precompiled order, but components integrated on top of an event-processing layer, whose functions are called by programs outside the compiler's scope.

It's even more relevant in the context of distributed applications and application servers.

Most of the high-level functions will not be called by the node itself. The parameters they will be called with are not determined by the node's own code, and that code is the only thing the compiler can see.

In this event-programming model, it's much more about high-level component definitions, which generic low-level functions can match against certain events or data types, than about having the whole code flow and function call hierarchy determined at compile time.

In this context, the amount the compiler can usefully check is very low. And this is true for any language, whether it's Java with Tomcat, C++ for servers/UI, or PHP.

Whatever the language, it cannot determine most of the relevant code paths at compile time, or check the client side of the interface. It doesn't even really make sense to try to figure that out at compile time.

Most of the code is event-handler definitions; there is no full OO code path or call hierarchy.


Blockchain nodes fit this case too: they don't have much of a main loop; they mostly react to network events, either from the P2P network or from the RPC interface. The Qt wallet UI is the same: no big main loop, only event handlers.



And my goal is to get to this kind of programming paradigm, where applications are programmed not as a single block whose code paths can all be determined at compile time, but as collections of components implementing certain interfaces or event handlers, with a low-level scheduler to format and dispatch the event handling across threads, modules, or nodes.

The scheduler doesn't have to know anything specific about the high-level type of any module, and modules don't have to know anything about lower layers or about other modules they don't use.

The compiler can't check anything, because generic types represent the objects; but it doesn't matter, because it couldn't determine the call hierarchy or code path anyway: there is no call hierarchy in the program being compiled.

It just dispatches generic events to the generic components/modules that handle them.

In this paradigm, programming an application is much like writing a Java applet or an ActiveX control: it contains no main loop, and it's about writing routines to handle events or process data rather than following a predetermined code path.

Most of the code defined will not be called from inside the component but by other programs/applications, and you know nothing about them: how they represent data or objects internally, or how they make function calls passing those objects and data. The compiler can't have a clue about any of it, neither the high-level code path nor the global application logic.

It doesn't have to, and shouldn't. The application does not follow the Turing model anyway, and the high-level features of a C/C++ compiler are at best gadgets; at worst they force you to complexify the code or add layers of abstraction where it doesn't really matter, because the compiler still can't check what is relevant in the application's execution flow. It can only check the code flow of the abstraction layer, which is mostly glue code and doesn't matter much in the big picture.
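For illustration, the dispatch layer I'm describing boils down to something like this (a minimal sketch with illustrative names, not the actual framework):

Code:

#include <stddef.h>

typedef struct event
{
    int   type;
    void *data;
} event;

typedef void (*event_handler)(event *ev);

#define MAX_HANDLERS 64
static struct { int type; event_handler fn; } handlers[MAX_HANDLERS];
static size_t n_handlers;

/* components register themselves; no precompiled call hierarchy */
void register_handler(int type, event_handler fn)
{
    handlers[n_handlers].type = type;
    handlers[n_handlers].fn   = fn;
    n_handlers++;
}

/* called by the scheduler for events from IRQs, sockets, other threads... */
void dispatch(event *ev)
{
    for (size_t i = 0; i < n_handlers; i++)
        if (handlers[i].type == ev->type)
            handlers[i].fn(ev);
}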


iamnotback
Sr. Member
Activity: 336
Merit: 265
April 07, 2017, 03:36:20 AM
Last edit: April 07, 2017, 05:11:03 AM by iamnotback
 #727

One of the reasons I am creating my project...


Who else is tired of this shit?
iamnotback
Sr. Member
Activity: 336
Merit: 265
April 07, 2017, 06:56:23 AM
Last edit: April 07, 2017, 07:59:04 AM by iamnotback
 #728

I wrote something in private that I wanted to share, so I decided to stick it here.

Quote from: iamnotback
To win against Bitcoin, an altcoin must have something other (and more significant) than just speculation. Bitcoin will always parasitize all speculation in the overall blockchain speculation ecosystem, even if its market cap is less than 50% of the ecosystem (as long as it is the largest).

Evidence:

Quote
as a trader i like the mess and the drama  Kiss



Quote
i've been told since forever that in order for something to have value, you need to exchange it for other value

...

yes, you may have a breakthrough and an answer for those "centralized by fiat power" consensus systems, but will the value of this new coin truly make the developers/early adopters insanely wealthy? if anyone could make that coin basically without exchanging what they are currently holding, what would the coin's value become, other than early adopters' initial speculation?

You could exchange your knowledge for tokens.



Quote
How do you justify the blatant BS linked above (your quote of me) wherein the Chinaman deliberately says some BS to crash the LTC price?

Quote
thats one tool in a pool of many. we've seen Americans (the famous "Bitcoin is a failed experiment" line), Russians, and Japanese all do the same to bitcoin/crypto over its history.

That is why we need an altcoin wherein the hodlers aren't speculating: they are using the tokens for something with which they have no desire to speculate.

With that wide base of transactional use which doesn't care about the exchange value, the manipulators will not be able to have much impact.

As I said, I have some ideas about how to make whales impotent. Traders won't like it, but long-term HODLers are going to love it, because it is deflationary, which is even better than Bitcoin (i.e. the coin supply will shrink forever, never reaching 0).
iamnotback
Sr. Member
Activity: 336
Merit: 265
April 07, 2017, 09:27:28 AM
 #729

I hope you don't disappear. And I hope I can show you something in code ASAP that makes you interested.

Quote from: IadixDev
And maybe you will understand why, for me, compile-time checking is mostly irrelevant.

Rather, I would say compile-time checks are important, especially for small details, but we can't possibly type every semantic due to unbounded semantics.

Quote from: IadixDev
Every programmer treats the Turing model as some kind of holy bible, but in truth it's mostly useful in the paradigm of programs that do scientific computation.

The Turing model is for programs that start with initial parameters, run their loop, and finish with the computed result.

Once you throw in IRQs (USB events, key/mouse input, network events, hard-drive events) or threads, it's not a Turing machine anymore: state can change outside the influence of the program.

This is unbounded nondeterminism.

Quote from: IadixDev
Most of the program flow is not functions calling each other in a precompiled order, but components integrated on top of an event-processing layer, whose functions are called by programs outside the compiler's scope.

It's even more relevant in the context of distributed applications and application servers.

Most of the high-level functions will not be called by the node itself. The parameters they will be called with are not determined by the node's own code, and that code is the only thing the compiler can see.

That is only fundamentally incompatible with compile-time (i.e. static) typing in the sense of an exponential explosion of types in type signatures.

Quote from: IadixDev
Whatever the language, it cannot determine most of the relevant code paths at compile time, or check the client side of the interface. It doesn't even really make sense to try to figure that out at compile time.

I don't think anyone was proposing dependent typing.

Anyway @iadix, I must say I'd better spend some time developing my stuff instead of talking about all this theory. I already talked about all this theory for months with @keean. I don't want to repeat it all again now.

Let me go try to do some coding right now, today. And let's see how soon I can show something in code you can respond to.

We can then trade ideas on specific coding improvements, instead of this abstract discussion.

I understand what you want, and I essentially want the same. We just have perhaps different ideas about the exact form and priorities, but let's see how close we are to agreement once I have something concrete in code to discuss.

Quote from: IadixDev
Most of the code defined will not be called from inside the component but by other programs/applications, and you know nothing about them: how they represent data or objects internally, or how they make function calls passing those objects and data. The compiler can't have a clue about any of it, neither the high-level code path nor the global application logic.

It doesn't have to, and shouldn't. The application does not follow the Turing model anyway, and the high-level features of a C/C++ compiler are at best gadgets; at worst they force you to complexify the code or add layers of abstraction where it doesn't really matter, because the compiler still can't check what is relevant in the application's execution flow. It can only check the code flow of the abstraction layer, which is mostly glue code and doesn't matter much in the big picture.

I think you may not be fully versed in typeclasses. They are not OOP classes. @keean and I had long discussions about how they apply to genericity and callbacks.

Please allow me some time to try to show something. This talk about abstractions, or your framework, will just slow me down.


Please, two or three sentence replies only. We really need to be more concise right now.
IadixDev
Full Member
Activity: 322
Merit: 151
They're tactical
April 07, 2017, 11:08:10 AM
Last edit: April 07, 2017, 11:56:20 AM by IadixDev
 #730

Oki Smiley Believe it or not, I'm already trying to be concise Grin Grin


Well, just one last thing I wanted to mention about structs. I know you already know most of this stuff, but you asked why the tree is better than structures, etc. Smiley

But if I had to really explain all my holistic reasoning, it would take a whole book, because it's a system that is originally bootable and can run on bare metal, so there are many aspects I designed coming from ActiveX and web browsers, and the problematics of video streaming, interfacing, data formats, p2p web 2.0, JavaScript, the DOM, etc. Smiley



But the very first motivation for doing the tree thing instead of C structures is not having to bother about pointer ownership and data translation.


Consider this:

Code:

typedef struct image
{
    unsigned char *data;
    int width;
    int height;
} image;

typedef enum { image_event, other_event } event_type;

typedef struct event
{
    event_type type;
    void *data;
} event;

image *global_list[16];
int n;   /* next free slot in global_list */

void handle_image(image *myimage)
{
    if (myimage->width < 256)
        global_list[n++] = myimage;   /* keeps a second pointer to the image */
}

void event_handler(event *myevent)
{
    if (myevent->type == image_event)
        handle_image((image *)myevent->data);
}


Well, it's a simple case, but the question is: do you need to free the event data after the call to event_handler or not?

Without a reference counter, you can't know.

If you free it, you might end up with a dangling pointer in global_list, or worse, a pointer to something else, with no way to detect it.

If you don't free it and it's not copied, you end up with a memory leak (allocated memory without any valid pointer to it in the program's data).

Same if you need to reallocate the pointer.


The main motivation is this: you can pass generic references between different modules and functions, and they can copy a reference to the object, with a lockless reference-counting algorithm so references can be shared between threads.
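For illustration, the lockless acquire/release can be sketched with C11 atomics (illustrative names, simplified from the real thing):

Code:

#include <stdatomic.h>
#include <stdlib.h>

typedef struct node
{
    atomic_int refs;
    /* ... payload: type tag, name, children ... */
} node;

node *node_acquire(node *n)
{
    /* relaxed is enough for taking an extra reference */
    atomic_fetch_add_explicit(&n->refs, 1, memory_order_relaxed);
    return n;
}

void node_release(node *n)
{
    /* acq_rel so all writes to the payload happen-before the final free */
    if (atomic_fetch_sub_explicit(&n->refs, 1, memory_order_acq_rel) == 1)
        free(n);
}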

Originally I developed the JSON parser just to be able to construct a complex object hierarchy from one line, to give object construction a more 'atomic' feeling where it succeeds or fails as a whole, and to have a sense of internal 'static object typing'.

And it's a very critical feature for my paradigm, because I want the design pattern to be based on asynchronous events rather than direct function/method calls, and there is no explicit code path the compiler can easily check.

This is a must for single-threaded processing of events emitted by asynchronous sources, and for having a 'green threaded' feeling at the high level, even in C; for most simple cases the green thread can stay locklessly synchronized with other heavy threads or interrupts.





And the other motivation is interfacing and data serialization/jsonification, binary compatibility, etc.


If you start from C structs, you need 4 functions around each structure you want to serialize or share: one to serialize, one to deserialize, one to jsonify, one to de-jsonify.

That code is spread across many different parts, away from the 'hot code' where the real work on the data takes place.

With this system, you can just add a member to the structure in the 'hot code', in the code that actually produces the data, and it will automatically be serialized or jsonified with the new member, without a single line of code to change anywhere else, even if that involves some boilerplate in the 'hot code'.

Even if this policy with RPC means that the caller defines the data format of the input and the callee the data format of the output, even when they are in different languages, it seems more logical and more efficient from a programmer's perspective.



But after all it's still C code, so you can use all the dirty C stuff, binary-packed structures, cheap casts, inline assembler, stack manipulation, and screw everything up and end with 0-day exploits every day.

Or you can accept the boilerplate and have no buffer overflows, automatic conversion to/from JSON and binary, lockless reference counting, and multi-thread safety when manipulating arrays and lists.



Most of the time the CPU cache and instruction pipelining will do a much better job of dealing with problems of concurrent access or caching: with instruction reordering, most modern CPUs have complex pipelines that handle this at runtime better than any compiler can, and the latest generations of CPUs and motherboards (north/south bridges) handle SMP well, with data shared between cores, caches, and bridges, atomic operations, etc.


My dream would be to get to green threads in C from bare-metal interrupts Smiley And I'm not too far off, actually, lol.


I hope it's not too long Smiley


I'm trying to be succinct; I think this is the last very long thing I need to explain for the moment, anyway Smiley


And I know you already know most of this stuff, and you've already discussed it for months with @keean Smiley

But it's to explain my side Cheesy


I started to code on the script parser too; I'll try to get something working in a few days Smiley

IadixDev
Full Member
Activity: 322
Merit: 151
They're tactical
April 07, 2017, 12:24:00 PM
 #731

Quote from: iamnotback
I hope you don't disappear. And I hope I can show you something in code ASAP that makes you interested.

I shouldn't disappear normally, but you never know these days  Shocked

Oki for the code example Smiley

Quote from: iamnotback
Rather, I would say compile-time checks are important, especially for small details, but we can't possibly type every semantic due to unbounded semantics.

Well, for me, ultimately the only things the CPU knows about are registers and memory addresses.

I spent some time, once upon a time, close to the cracking scene, and I'm quite familiar with Intel assembler; for me, if you don't understand security at the C level, you don't have any security at all.

And most high-level languages still use a C compiler or interpreter, and still run on a CPU that only knows about registers and memory addresses.

And no VM is perfectly safe. As far as I know, most exploits on Java come from exploiting bugs in the VM rather than in the program itself, or even from the layers underlying the VM: the kernel, libs, etc.

It's the problem with Rust, and why it doesn't really bring that much more security than C for programming operating systems, kernels, or low-level things, as they say.

And it's why, to me, high-level languages are only false security; and in the context of event-based programming built on plugins/components, the compiler can't check much.

A C++ compiler can't really check for memory leaks, and the Java SDK/VM doesn't know how to prevent deadlocks or screw-ups, not even talking about exploits via the VM or the underlying layers down to the kernel.

It just gives the impression it does, by providing a semantics to express these things in the programming language, but in the end it still can't really check that the program will actually function as the semantics imply.

Quote from: iamnotback
This is unbounded nondeterminism.

It looks like oracle-based Turing machines, but it's not even a problem of deterministic algorithms: the code path is determined by hardware interrupts or other threads rather than by the algorithm itself.

Even if the algorithm is perfectly predictable in itself, if it's called from an interrupt or uses shared state in a multitasking environment, the execution flow is still not predictable from the code.

Quote from: iamnotback
That is only fundamentally incompatible with compile-time (i.e. static) typing in the sense of an exponential explosion of types in type signatures.

Yes, for me the trade-off between the explosion of data types and signatures and the benefit it brings in terms of high-level programming is, in this context, not in favor of compile-time type checking Smiley

Quote from: iamnotback
I don't think anyone was proposing dependent typing.

Anyway @iadix, I must say I'd better spend some time developing my stuff instead of talking about all this theory. I already talked about all this theory for months with @keean. I don't want to repeat it all again now.

Let me go try to do some coding right now, today. And let's see how soon I can show something in code you can respond to.

We can then trade ideas on specific coding improvements, instead of this abstract discussion.

I understand what you want, and I essentially want the same. We just have perhaps different ideas about the exact form and priorities, but let's see how close we are to agreement once I have something concrete in code to discuss.

Yes, I think I got most of my thinking across anyway, or the part relevant for the moment, and I can get yours from the discussion on the GitHub or on this forum Smiley

Quote from: iamnotback
I think you may not be fully versed in typeclasses. They are not OOP classes. @keean and I had long discussions about how they apply to genericity and callbacks.

Please allow me some time to try to show something. This talk about abstractions, or your framework, will just slow me down.

Please, two or three sentence replies only. We really need to be more concise right now.

I have read those discussions; I need a deeper look into it. To me it seems similar in certain aspects to the purpose of my system of dynamic object trees, though not exactly in the same context Smiley

But I'm pretty sure this typeclass system could be used as input to create those object trees in the script language.

It would not be too hard to implement the low-level aspects of memory/instances/references and concurrent access to the instances with the framework, a large part of the low-level functions being already written to provide a runtime that manipulates data and objects created from such a definition.

But it's not exactly done with the same purpose in mind, coming from the same place, or solving exactly the same things =)




IadixDev
Full Member
Activity: 322
Merit: 151
They're tactical
April 07, 2017, 01:15:44 PM
 #732

The only point where I see my approach differing is the GC vs. ref-count thing, but I think the two are orthogonal.

I want ref counting to avoid pointer ownership, and it's mostly used for short-lived objects or permanent objects; I don't really have an intermediate case.

And you want GC for macro management of memory with determined lifetimes.



The thing is, for me most GC languages (like Java or JS) are incredibly inefficient with memory; GC sucks for general purpose, and for most simple webapps the cases are not so complex.

The only kind of GC that really makes sense to me is in video game engines or game console SDKs, because there is a high-level abstraction of objects like levels and BSP trees, and a high-level abstraction of the execution flow inside the engine.

So it allows good, automatic management of object lifetimes, because the whole execution context and the objects it uses are well defined at the application level.

And mostly the whole GC is hard-coded into another SDK, and hard-coded into the game program.



Ultimately, for me GC only makes sense in the context of a higher-level paradigm that defines object lifetimes in relation to explicit lifetime boundaries in the application's execution.


General-purpose GCs suck, and they just end up not freeing the memory.

Try using any Java or JavaScript app that is a bit intensive with OpenGL or multimedia for a long time, and it will all end up a big mess, unless the user flushes the GC manually.

Android cheats a lot on this with fake multitasking: only the memory for one app really has to be loaded at once, so it's not so bad when you have 40 apps and tabs running; only one is in physical memory at a time, so even if the GC sucks, it doesn't matter too much.




But I would be all for defining high-level application paradigms where it makes sense to have a fast GC: when object lifetime boundaries can be determined easily at compile time, or when it's possible to determine at a certain point of the program that no reference to a certain set of marked objects will be used anymore anywhere, so they can be safely swept.



But other than that, for general purposes it sucks, and for short-lived objects it sucks.



As the way I program the blockchain node doesn't use much in-memory caching, and deals mostly with short-lived objects that come from the network and are either stored or sent back to the network, it's not too hard to manage object lifetimes.



And the reference counter still allows a lot of freedom when passing references to other functions, even for short-lived objects, or when the lifetime doesn't matter or can't really be known when the object is instantiated.



Even if it's not extremely efficient for complex cases, with the lockless internal allocator and memory pool it's still fast to allocate and free memory, and it's easy to create specific pools for objects that have determined lifetime boundaries.
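A pool for objects with a known lifetime boundary boils down to something like this (a minimal sketch with illustrative names, not the actual allocator): allocate by bumping a pointer, and free everything at once when the boundary is reached.

Code:

#include <stddef.h>
#include <stdlib.h>

typedef struct pool
{
    char  *base;
    size_t used, cap;
} pool;

int pool_init(pool *p, size_t cap)
{
    p->base = malloc(cap);
    p->used = 0;
    p->cap  = cap;
    return p->base != NULL;
}

/* bump-allocate from the pool; no per-object free needed */
void *pool_alloc(pool *p, size_t n)
{
    n = (n + 15) & ~(size_t)15;            /* keep 16-byte alignment */
    if (p->used + n > p->cap) return NULL;
    void *ptr = p->base + p->used;
    p->used += n;
    return ptr;
}

void pool_reset(pool *p)   { p->used = 0; }   /* lifetime boundary reached */
void pool_destroy(pool *p) { free(p->base); }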

iamnotback
Sr. Member
Activity: 336
Merit: 265
April 08, 2017, 01:54:42 AM
Last edit: April 09, 2017, 06:19:46 AM by iamnotback
 #733

Re: Who else is tired of this shit?

Quote
People (like me) who might think that Bitcoin is a platform that can really transform the whole world and be a big push for human development can be in for a big disappointment. Like any other nice technology introduced by man, it can be corrupted along the way, because Bitcoin or cryptocurrency has no power to change human nature. We still bring the same greed and selfishness we have always had, and this is quite true of the raging debate in Bitcoin right now. While we criticize the world order as it is now and say Bitcoin can change the system, people in the Bitcoin community are sadly manifesting the same sickness and the same traits we want to avoid. This is getting ironic, but it is not surprising to me.

The fundamental solution to eliminating greed and power vacuums requires a shift away from fungible money to knowledge trading in the Inverse Commons.

I have a plan.


Quote
Well, if you are so tired of all of it, I think you need to take a break, have a moment of silence, or stop thinking about it. I think you are pretty stressed out by the over-and-over topic and the nonsense that has been driving everyone crazy about bitcoin. I will not rant, because all of us are only human and need some rest from these kinds of issues.

I mean I am tired of the greed and power vacuums, not just this specific instance of them. Those negative attributes of fungible money are very noisy and waste so much of humanity's resources.



Quote
Any market cap other than bitcoin's is a smoke-and-mirrors illusion; any market cap lower than $10B should not even be considered, because a few big investors could easily manipulate the entire ecosystem, but not one above $10-$15B, as the risks are too high for a few big whales. So I wouldn't count on alts that much.

iamnotback
Sr. Member
Activity: 336
Merit: 265
April 09, 2017, 06:29:05 AM
 #734

@iadix, PLEASE DO NOT REPLY TO THIS. We will talk about this in the future. I have some coding to do first.

This applies to what we've been discussing:

https://github.com/keean/zenscript/issues/11#issuecomment-287266460
iamnotback
Sr. Member
Activity: 336
Merit: 265
April 10, 2017, 06:24:59 AM
 #735

It is important for me to clear up the record on the following, because I am preparing to blog on a ToE which ties in everything we've been discussing lately.  Shocked

Re: OT crap from Compact Confidential Transactions for Bitcoin

Quote
Edit2: Thanks for the move, totally appropriate.

Hitler Gregory had moved it from the original thread where it belonged in context, and he renamed the thread to this ad hominem insult name, OT crap from Compact Confidential Transactions for Bitcoin.

What is so ironic is that I think I later ended up potentially solving the proof-of-square requirement (required by the flaw Andrew Poelstra aka andytoshi discovered) for Compact Confidential Transactions (CCT), when I merged that homomorphic encryption with Cryptonote ring signatures, prior to the similar attempt to merge Blockstream's less efficient CT with Cryptonote.

Quote
Andrew Poelstra and Gregory Maxwell don't need any defense by me; their records stand on their own, but I'm thinking pointing this out may be helpful to those that aren't familiar with your antics. I'll also point out that most people, especially GMaxwell, have been overwhelmingly patient with you.

https://bitcointalk.org/index.php?topic=279249.msg5640949#msg5640949

Lol, you linked to where I had been the first one to point out to Gregory Maxwell that CoinJoin can always be jammed with DoS, because one can't blacklist the attacker: the entire point of CoinJoin is to provide mixing so that an attacker can obscure his UTXO history.

You are so careless that you didn't even realize that was my famous shaming of Gregory. Did you miss the post where I declared "checkmate", then Gregory responded with ad hominem, and then by the force of my correct logic he had to STFU?



Lol, again you missed where, at the end, I showed the math derivation of how to defeat selfish mining, which was the basic idea behind published designs such as GHOST (of which I wasn't aware at the time, and only became aware when I read Vitalik's blog).

You linked to a guy who is technologically ignorant and is currently a BU shill.



Yes, Gregory did point out an error in my conceptualization of Winternitz, which I had only become aware of hours or days before that, and I admitted it. I even went on to write Winternitz code and become quite expert in it, even incorporating Winternitz into my anti-DDoS conceptualization.

But you failed to cite the other occasions where I put Gregory's foot in his mouth, such as my recent exposé on how Bitmain checkmated Blockstream, and in 2016 when I pointed out his flawed logic and math on why Ogg shouldn't have an index (a format in which he was intimately involved as co-designer of one of the key compression codecs!):


Quote
And how is not having the index any worse than not allowing an index? I fail to see the logic. You seem to be arguing that the receiving end will expect indexes and not be prepared for the case where indexes are not present. But that is a bug in the receiving end's software then. And in that case, there is no assurance that software would have done the index-less seeking more efficiently under the status quo of not allowing an index. None of this makes sense to me.

Also, I don't understand how you calculate a 20% increase in file size for adding an index. For example, let's take an average 180-second song consuming roughly 5MB with VBR encoding. Let's assume my users are satisfied with seeking in 1-second increments, so I need at most 180 22-bit indices; that is only 495 bytes, which is only a 0.01% increase! On top of that, I could even compress those 22-bit indices into relative offsets if I want to shrink that by roughly 75%, to 0.0025%.
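(Checking that arithmetic: 180 x 22 bits = 3960 bits = 495 bytes, and 495 / (5 x 2^20 bytes) is roughly 0.0094%, i.e. about 0.01% as quoted.)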

Ah, that reminds me why @stereotype keeps trolling my threads again and again, continuing to be habitually incorrect.
stereotype
Legendary
Activity: 1554
Merit: 1000
April 10, 2017, 08:35:48 AM
 #736

Quote from: iamnotback on April 10, 2017, 06:24:59 AM

You are, without doubt, a professional psychologist's wet dream.
iamnotback
Sr. Member
Activity: 336
Merit: 265
April 10, 2017, 10:50:59 AM
 #737

Quote from: stereotype
You are, without doubt, a professional psychologist's wet dream.

And again, more evidence of my expertise.
stereotype
Legendary
Activity: 1554
Merit: 1000
April 10, 2017, 02:56:53 PM
 #738

Quote from: stereotype
You are, without doubt, a professional psychologist's wet dream.

Quote from: iamnotback
And again, more evidence of my expertise.

What is it you would like people to think, when they read how absolutely fabulous you are, Shelby?
iamnotback
Sr. Member
Activity: 336
Merit: 265
April 11, 2017, 09:29:26 AM
Last edit: April 11, 2017, 01:17:08 PM by iamnotback
 #739

Why did Jorge Stolfi lie to the SEC, claiming commodity money is an equity? Bitcoin ETF.




Anyone who doubts my expertise should read this exchange with @dinofelis:

https://bitcointalk.org/index.php?topic=1767014.msg18543926#msg18543926
https://bitcointalk.org/index.php?topic=1767014.msg18541315#msg18541315
https://bitcointalk.org/index.php?topic=1767014.msg18532076#msg18532076
https://bitcointalk.org/index.php?topic=1767014.msg18530076#msg18530076

My conceptualization of commodities and taxonomy should also provide some insight into my reasoning skills:

https://bitcointalk.org/index.php?topic=1864869.msg18545359#msg18545359
iamnotback
Sr. Member
Activity: 336
Merit: 265
April 11, 2017, 02:47:50 PM
Last edit: April 11, 2017, 02:59:36 PM by iamnotback
 #740

Interesting discussion going on over there... (click the quote if you're interested in reading the context)

Quote
Also, BTC will not be used by billionaires only; that's stupid. People will demand changes so bitcoin cannot be used by only a handful of people on earth. If it takes UASF, then UASF will happen so segwit + LN can happen and everyone can use bitcoin; additional blocksize increases will come too. Nobody will support the "billionaires only blockchain"; that's stupid.

Sorry, you can't do anything to stop it:


Cry and scream all you want. You are wasting your time fighting what is inevitable.

Soon you will realize this. Go ahead and try. My popcorn is laughing.

Quote
It's as easy as UASF + a PoW change with a new solution, such as a randomly changing algorithm, to avoid efficient ASIC stacking.

"BillionaireChain", used by 2,000 people on earth, will be seen as a joke by the rest of the population, and it will no longer be Bitcoin. Progress will move on.

None of your democracy shenanigans will prosper. But feel free to lose all your wealth trying.

The opinion of the masses does not matter, if we presume that fungible money will remain supreme in the economy.

I have one alternative to offer, which is the theory that the economy will bifurcate into a fungible-money-driven tangible economy and a knowledge-age economy in the Inverse Commons. The latter is what my BitNet project is about. If I am correct, then that will be our only alternative.

But don't believe me. Please go waste your time and lose all your wealth. The smart money is starting to recognize my expertise. Please do your own due diligence.