781  Alternate cryptocurrencies / Altcoin Discussion / Re: Do you think "iamnotback" really has the" Bitcoin killer"? on: April 07, 2017, 12:24:00 PM
I hope you don't disappear. And I hope I can show you something in code asap that makes you interested.

And maybe you will understand why, for me, compile-time checking is mostly irrelevant.

I shouldn't normally disappear, but you never know these days :o

OK for a code example :)

Rather I would say compile-time checks are important, especially for small details, but we can't possibly type every semantic due to unbounded semantics.

Because every programmer treats the Turing model as some kind of holy bible, when in truth it's mostly useful for the paradigm of programs that do scientific computation.

The Turing model is for programs that start with initial parameters, run their loop, and finish with a computed result.

Once you throw in IRQs (USB events, key/mouse input, network events, hard drive events) or threads, it's not a Turing machine anymore. State can change outside the influence of the program.


Well, ultimately I still hold to the idea that the only things a CPU knows about are registers and memory addresses.

I spent some time once upon a time close to the cracking scene, and I'm quite familiar with Intel assembler; for me, if you don't understand security at the C level, you don't have any security at all.

And most high-level languages still go through a C compiler or an interpreter, and still run on a CPU that only knows about registers and memory addresses.

And no VM is perfectly safe. As far as I know, most Java exploits come from bugs in the VM rather than in the program itself, or even from the layers underlying the VM: the kernel, libs, etc.

It's the problem with Rust, and why it doesn't really bring that much more security than C for programming an operating system, a kernel, or other low-level things, as they say.

And it's why, to me, high-level languages are only false security; in the context of event-based programming built on plugins/components, the compiler can't check much.

A C++ compiler can't really check for memory leaks, and the Java SDK/VM doesn't know how to prevent deadlocks or other screw-ups, not even talking about exploits via the VM or the underlying layers down to the kernel.

It just gives the impression it does, by providing a semantic to express these things in the language, but in the end it still can't really check that the program will actually behave as that semantic implies.

This is unbounded nondeterminism.

Most of the program flow is not functions calling each other in a precompiled order, but components integrated on top of an event-processing layer, whose functions are called by programs outside the compiler's scope.

It's even more relevant in the context of distributed applications and application servers.

Most of the high-level functions will not be called by the node itself. The parameters they are called with are not determined by the node's own code, which is the only thing the compiler can see.

It looks like an oracle-based Turing machine, but it's not even a question of deterministic algorithms: the execution is driven by hardware interrupts or other threads rather than by the algorithm itself.

Even if the algorithm is perfectly predictable in itself, if it's called from an interrupt or uses shared state in a multitasking environment, the execution flow is still not predictable from the code.




That is only fundamentally incompatible with compile-time (i.e. static) typing in the sense of an exponential explosion of types in type signatures.

Whatever the language used, it cannot determine most of the relevant code paths at compile time, or check the client side of the interface. It doesn't even really make sense to try to figure that out at compile time.


Yes, for me the trade-off between the explosion of data types and signatures and the benefit it brings in terms of high-level programming is, in this context, not in favor of compile-time type checking :)


I don't think anyone was proposing dependent typing.

Anyway @iadix, I must say I'd better spend some time developing my stuff and not talking about all this theory. I already talked about all this theory for months with @keean. I don't want to repeat it all again now.

Let me go try to do some coding right now, today, and let's see how soon I can show something in code you could respond to.

We can then trade ideas on specific coding improvements, instead of this abstract discussion.

I understand what you want, and I essentially want the same. We just have perhaps a different idea about the exact form and priorities, but let's see how close we are to agreement once I have something concrete in code to discuss.

Yes, I think I got most of my thinking across anyway, or at least the part relevant for the moment, and I can get yours from the discussion on the GitHub or on this forum :)


Most of the code defined will not be called from inside the component, but by other programs/applications, and you know nothing about them: how they might represent data or objects internally, or how they might make function calls passing those objects and data. The compiler can't have a clue about any of it, nor about the high-level code path or global application logic.

It doesn't have to, and shouldn't. The application doesn't follow the Turing model anyway, and the high-level features of a C/C++ compiler are at best gadgets; at worst they force you to complexify the code or add levels of abstraction where it doesn't really matter, because the compiler still can't check what is relevant in the application's execution flow. It can only check the code flow of the abstraction layer, which is mostly glue code and doesn't matter that much in the big picture.

I think you may not be fully versed on typeclasses. They are not OOP classes. @keean and I had long discussions about how these apply to genericity and callbacks.

Please allow me some time to try to show something. This talk about abstractions, or your framework, will just slow me down.

Please two or three sentence replies only. We really need to be more concise right now.

I have read those discussions; I need a deeper look into them. To me it seems similar in certain aspects to the purpose of my system of dynamic object trees, but not exactly in the same context :)

But I'm pretty sure this typeclass system could be used as input to create those object trees in the script language.

It would not be too hard to implement the low-level aspects of memory/instances/references and concurrent access to the instances with the framework; a large part of the low-level functions already exist, providing a runtime to manipulate data and objects created from such a definition.

But it wasn't done with exactly the same purpose in mind, coming from the same place, or solving exactly the same problems =)



782  Alternate cryptocurrencies / Altcoin Discussion / Re: Do you think "iamnotback" really has the" Bitcoin killer"? on: April 07, 2017, 11:08:10 AM
OK :) Believe it or not, I'm already trying to be concise ;D ;D


Well, just one last thing I wanted to mention about structs. I know you already know most of this stuff, but you asked why the tree is better than structures, etc. :)

If I had to really explain all my holistic reasoning it would take a whole book, because it's a system that is originally bootable and can run on bare metal, so many aspects of the design come from ActiveX and web browsers, and from the problematics of video streaming, interfacing, data formats, p2p web 2.0, JavaScript, the DOM, etc. :)


But the very first motivation for doing the tree thing instead of C structures is not having to bother about pointer ownership and data translation.


Consider this:

Code:

typedef struct image
{
    unsigned char *data;
    int width;
    int height;
} image;

typedef enum event_type { image_event /* , ... */ } event_type;

typedef struct event
{
    event_type type;
    void *data;
} event;

image *global_list[16];
int n;

void handle_image(image *myimage)
{
    if (myimage->width < 256)
        global_list[n++] = myimage;   /* keeps a reference past the call */
}

void event_handler(event *myevent)
{
    if (myevent->type == image_event)
        handle_image((image *)myevent->data);
}





A simple case, but the question is: do you need to free the event data after the call to event_handler, or not?

Without a reference counter you can't know.

If you free it, you might end up with a dangling pointer in global_list, or worse, a pointer to something else, with no way to detect it.

If you don't free it and it wasn't copied, you end up with a memory leak (allocated memory without any valid pointer to it left in the program's data).

The same applies if you need to reallocate the pointer.


This is the main motivation: you can pass a generic reference between different modules and functions, and they can keep a copy of that reference, with a lockless reference-counting algorithm so references can be shared between threads.
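
To give a rough idea of the kind of lockless reference counting I mean (a minimal sketch with hypothetical names, not the framework's actual API), C11 atomics are enough for the retain/release part:

Code:
/* Minimal sketch of lockless reference counting with C11 atomics.
   The names (obj_ref, obj_retain, obj_release) are hypothetical,
   not the framework's actual API. */
#include <stdatomic.h>

typedef struct obj_ref {
    atomic_int refcount;                  /* starts at 1 for the creator */
    void (*destroy)(struct obj_ref *);    /* type-specific destructor */
} obj_ref;

static void obj_retain(obj_ref *o)
{
    atomic_fetch_add_explicit(&o->refcount, 1, memory_order_relaxed);
}

static void obj_release(obj_ref *o)
{
    /* acq_rel so all writes to the object happen-before the destroy */
    if (atomic_fetch_sub_explicit(&o->refcount, 1, memory_order_acq_rel) == 1)
        o->destroy(o);
}

With something like this, handle_image would retain the image it stores in global_list, the event loop would release the event data after dispatch, and whoever holds the last reference frees it: no dangling pointer, no leak.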

Originally I just developed the JSON parser to be able to construct a complex object hierarchy from one line, to get a more 'atomic' feeling in object construction, which succeeds or fails as a whole, and to have a sense of internal 'static object typing'.

It's a critical feature for my paradigm, because I want the design pattern to be based on asynchronous events rather than on direct function/method calls, where there is no explicit code path the compiler can easily check.

It's a must for having a single thread process the events emitted by asynchronous sources, and for getting the 'green threaded' feeling at the high level, even in C; in most simple cases the green thread can stay locklessly synchronized with other heavy threads or interrupts.
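
For the simplest case, that lockless synchronization can be sketched as a single-producer/single-consumer ring buffer (my illustration, hypothetical names): an interrupt or worker thread pushes events, the single event-processing thread pops them, and no locks are involved:

Code:
/* Single-producer/single-consumer lockless event queue (sketch, hypothetical
   names). One writer (interrupt/worker thread), one reader (event thread). */
#include <stdatomic.h>
#include <stdbool.h>

#define QSIZE 256   /* power of two */

typedef struct { void *slots[QSIZE]; atomic_uint head, tail; } event_queue;

static bool queue_push(event_queue *q, void *ev)   /* producer side */
{
    unsigned t = atomic_load_explicit(&q->tail, memory_order_relaxed);
    unsigned h = atomic_load_explicit(&q->head, memory_order_acquire);
    if (t - h == QSIZE) return false;              /* full */
    q->slots[t % QSIZE] = ev;
    atomic_store_explicit(&q->tail, t + 1, memory_order_release);
    return true;
}

static void *queue_pop(event_queue *q)             /* consumer side */
{
    unsigned h = atomic_load_explicit(&q->head, memory_order_relaxed);
    unsigned t = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (h == t) return NULL;                       /* empty */
    void *ev = q->slots[h % QSIZE];
    atomic_store_explicit(&q->head, h + 1, memory_order_release);
    return ev;
}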





The other motivation is interfacing and data serialization/jsonification, binary compatibility, etc.


If you start from a C struct, you need 4 functions around each structure you want to serialize or share: one to serialize, one to deserialize, one to jsonify, one to de-jsonify.

That code is spread over many different parts, away from the 'hot code' where the real work on the data takes place.

With this system, you can just add a member to the structure in the 'hot code', in the code that actually produces the data, and it will automatically be serialized or jsonified with the new member, without a single line of code to change anywhere else, even if that involves some boilerplate in the 'hot code'.
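
A toy version of that idea (my sketch, not the framework's actual code): when the data lives in a generic typed tree, one generic walker jsonifies any object, so adding a member is one more line in the hot code and the serializer never changes:

Code:
/* Toy generic tree and one generic jsonifier (sketch, hypothetical API).
   Adding a member is one more node in the hot code; no per-struct
   serialize/deserialize functions anywhere. */
#include <stdio.h>

typedef enum { T_INT, T_STR, T_OBJ } node_type;

typedef struct node {
    node_type type;
    const char *name;            /* key name, NULL for the root */
    long i; const char *s;       /* leaf payloads */
    struct node *child, *next;   /* tree links */
} node;

static void to_json(const node *n, FILE *out)
{
    if (n->name) fprintf(out, "\"%s\":", n->name);
    switch (n->type) {
    case T_INT: fprintf(out, "%ld", n->i); break;
    case T_STR: fprintf(out, "\"%s\"", n->s); break;
    case T_OBJ:
        fputc('{', out);
        for (const node *c = n->child; c; c = c->next) {
            to_json(c, out);
            if (c->next) fputc(',', out);
        }
        fputc('}', out);
        break;
    }
}

int main(void)
{
    node h   = { T_INT, "height", 128, NULL, NULL, NULL };
    node w   = { T_INT, "width",  256, NULL, NULL, &h };
    node img = { T_OBJ, NULL, 0, NULL, &w, NULL };
    to_json(&img, stdout);       /* prints {"width":256,"height":128} */
    return 0;
}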

This policy with RPC does mean that the caller defines the data format of the input and the callee the data format of the output, even though they can be in different languages. But it seems more logical and more efficient from a programmer's perspective.



But in the end it's still C code, so you can use all the dirty C stuff: binary packed structures, cheap casts, inline assembler, stack manipulation, and screw everything up and end with 0day exploits every day.

Or you can use the boilerplate and get no buffer overflows, automatic conversion to/from JSON and binary, lockless reference counting, and multi-thread safety on manipulation of arrays and lists.



Most of the time the CPU cache and instruction pipelining will do a much better job of dealing with these problems, whether it's concurrent access or caching. With instruction reordering, most modern CPUs have a complex instruction pipeline that can do the job at runtime much better than any compiler, and the latest generations of CPUs and motherboards (north bridge, south bridge, etc.) handle SMP well, with data shared between cores, caches, and bridges, atomic operations, etc.


My dream would be to get to green threads in C from bare-metal interrupts :) And I'm not too far off, actually, lol


I hope it's not too long :)


I'm trying to be succinct; I think it's the last very long thing I need to explain for the moment anyway :)


And I know you already know most of this stuff, and you already discussed it for months with keean :)

But it's to explain my side :D


I started to code the script parser too; I'll try to get something working in a few days :)
783  Bitcoin / Bitcoin Discussion / Re: Who else is tired of this shit? on: April 07, 2017, 10:22:01 AM
Hello :)

Well, not to sound harsh, but...

Yeah, I think it's obvious by now where the world of crypto is headed.

It's not going to help third-world countries fight corruption in the financial world, or help the micro-economy. It's just another toy millionaires use to grab more millions from more people with gray-area financial tactics in this unregulated world.

Crypto today is probably the most corrupt financial network. It's all about who you know rather than what you know; all about loyalty, not integrity.

If you know anything about real economics, you can say that today 90% of its value is a speculative bubble.

You can always pretend there is a hidden hand following market logic, but that's not the case; nor is it about investment in innovation or structural development. It's all about cliques making shady moves for their own personal profit.

You can always try to understand the stakeholders and mining farms and movers and shakers, to put your stash on the right coin at the right moment, but it looks like betting on Italian football or French underground boxing.

All rigged up, and you can't win big if you don't know whose stash is involved. It's not about knowing sport performance anymore, any more than it's about whether one coin is better than another.

For most people bitcoin is just a "get rich quick on the internet" scheme.

No need to beat around the bush forever.

And I'm not even saying this to imply what you should do about it, other than being angry about it on the Internet, or going to make your own :)

But we need to face what crypto is today. It's not about technical innovation, fixing the financial world, developing the micro-economy, or any of that.
784  Alternate cryptocurrencies / Altcoin Discussion / Re: Do you think "iamnotback" really has the" Bitcoin killer"? on: April 06, 2017, 09:11:13 AM
The core holistic concept I originally wanted to get at was "loopless programs". And maybe you will understand why, for me, compile-time checking is mostly irrelevant.


Because every programmer treats the Turing model as some kind of holy bible, when in truth it's mostly useful for the paradigm of programs that do scientific computation.

The Turing model is for programs that start with initial parameters, run their loop, and finish with a computed result.

Once you throw in IRQs (USB events, key/mouse input, network events, hard drive events) or threads, it's not a Turing machine anymore. State can change outside the influence of the program.

And most applications today don't follow this model at all.

Most applications, whether server or UI, have a mostly empty main loop and are programmed with handlers for certain events; for UI apps, the main loop mostly pumps and dispatches events from different sources. In a webapp there is no 'main loop' at all (it's in the browser or the JS engine).

In any case, the whole code path is never determined at compile time.

There is no predefined code path that the compiler can follow.
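
A minimal sketch of the point (my illustration, standard C): the compiler sees no code path that ever ends this loop, yet an external event ends it at runtime:

Code:
/* State mutated outside the program's visible control flow. The compiler
   sees no code path that sets `stop`, yet it changes at runtime when a
   signal (an "external event") arrives. */
#include <signal.h>
#include <stdio.h>

static volatile sig_atomic_t stop = 0;

static void on_sigint(int sig)
{
    (void)sig;
    stop = 1;              /* state change driven by the outside world */
}

int main(void)
{
    signal(SIGINT, on_sigint);
    while (!stop) {
        /* event pump: nothing here decides when the loop ends */
    }
    puts("interrupted from outside the compiled code path");
    return 0;
}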

The whole program is made as a collection of classes and modules that handle different kinds of events, whether they come from the hardware or from another thread.

Most of the program flow is not functions calling each other in a precompiled order, but components integrated on top of an event-processing layer, whose functions are called by programs outside the compiler's scope.

It's even more relevant in the context of distributed applications and application servers.

Most of the high-level functions will not be called by the node itself. The parameters they are called with are not determined by the node's own code, which is the only thing the compiler can see.

In this event-programming model, it's really more about high-level component definitions, which can be manipulated by generic low-level functions to match certain events or data types, rather than having the whole code flow and function-call hierarchy determined at compile time.

In this context, the number of things for which the compiler can be useful is very low. And this is true for any language used, whether it's Java with Tomcat, C++ for a server/UI, or PHP.

Whatever the language used, it cannot determine most of the relevant code paths at compile time, or check the client side of the interface. It doesn't even really make sense to try to figure that out at compile time.

Most of the code is event-handler definitions, but there is no full OO code path or call hierarchy.


Blockchain nodes fit this case too: they don't have much of a main loop, they mostly react to network events, whether from the P2P network or the RPC interface. The Qt wallet UI is the same, no big main loop, only event handlers.


My goal is to get to this kind of programming paradigm, where applications are not programmed as a single block whose every code path can be determined at compile time, but as collections of components implementing certain interfaces or event handlers, with a low-level scheduler to format and dispatch that event handling across different threads, modules, or nodes.

The scheduler doesn't have to know anything specific about the high-level types of any module, and modules don't have to know anything about the lower layers or about other modules they don't use.

The compiler can't check anything, because a generic type represents the objects; but it doesn't matter, because it couldn't determine the call hierarchy or code path anyway: there is no call hierarchy in the program being compiled.

It just dispatches generic events to the generic components/modules that handle them.

In this paradigm, programming an application is mostly like a Java applet or ActiveX: it doesn't contain a main loop, and it's more about programming routines to handle events or process data than about a predetermined code path.

Most of the code defined will not be called from inside the component, but by other programs/applications, and you know nothing about them: how they might represent data or objects internally, or how they might make function calls passing those objects and data. The compiler can't have a clue about any of it, nor about the high-level code path or global application logic.

It doesn't have to, and shouldn't. The application doesn't follow the Turing model anyway, and the high-level features of a C/C++ compiler are at best gadgets; at worst they force you to complexify the code or add levels of abstraction where it doesn't really matter, because the compiler still can't check what is relevant in the application's execution flow. It can only check the code flow of the abstraction layer, which is mostly glue code and doesn't matter that much in the big picture.

785  Alternate cryptocurrencies / Altcoin Discussion / Re: Do you think "iamnotback" really has the" Bitcoin killer"? on: April 05, 2017, 02:35:23 PM
But I think you really overestimate the relevance of compile-time type checking in the context of cross-language interfaces and distributed applications.

Even in the best case, with a 100% homogeneous language for the whole app, all modules, and all layers down to the kernel level, the compiler can only check its side of the application.

Nothing says all parts will be compiled with the same compiler, or even written in the same language.

And it seems impossible to cover every part, from binary data protocols and crypto, through cross-language interface definitions, to high-level OO programming for distributed applications, in a single high-level language.

For me it's impossible.

You need to make a compromise somewhere.

The compromise is that the glue code between the low and high levels is full of boilerplate, but I don't see a better way to do it.

It remains usable on both ends, from low-level/kernel/binary data to high-level OO, network protocols, and RPC, and the data representation stays consistent in every part.

It should be easy to program simple OO/GC-like languages, or high-level definitions of binary network protocols, with a syntax similar to JS, or almost compatible with a JS engine, but with other built-in objects available through module API/interface calls.

The module abstraction replaces the JavaScript DOM object definitions, and is a much simpler interface.

Or it's possible to simulate the interface of the browser's HTTP object, to have more browser-JS-compatible code for distributed requests.
786  Alternate cryptocurrencies / Altcoin Discussion / Re: Do you think "iamnotback" really has the" Bitcoin killer"? on: April 05, 2017, 01:47:43 PM
From your discussion on the git:

https://gist.github.com/shelby3/19ecb56e16f159096d9c178a4b4cd8fd

Quote
One example is that C can't stop you from accidentally reading from uninitialized memory, or writing+reading past the end of an allocated memory block, thus opening your code to innumerable security holes. You'll never know if you've caught all these bugs which become Zeroday exploits, because all your code is written with grains of sand detail. Rather, low-level code should only be used for the 5 - 20% of the code that needs to be finely tuned by an expert. High-level programming languages result in much faster programs most of the time in the real world scenario (not benchmarks contests). High-level programs are less verbose, provide more consistency of structure and performance, and enable less opaque expression of semantics (i.e. the program's logic and purpose). C obscures the actual semantics in loads of unnecessary low-level details, such as manually managing pointers and the inability to express and abstract over high-level data structures such as collections and functional composition.

Quote
Not fault, but rather reading the wrong data injected by the hacker.

Hacker is injecting some data into memory that you read by accident; because he observes one of your arrays sometimes precedes a certain string that you store in memory. For example, you receive some input string from the Internet, store that on the heap, then with certain random probability that string ends up being read from due to a bug in your code that reads from an array on the heap. In this way, the hacker has injected some data where you did not expect it. This is one way how Zeroday exploits are made.


This is why it's better to use the object tree structure than C structures directly.

And C structures can be dicey with cross-compiling and binary network protocols.

Creating a JSON string from an object tree structure is not so trivial in C either (stdio, varargs, strcpy, strtod, etc.). Even with C++ and Boost Spirit, it's far from limpid.

The object tree system avoids bad memory accesses like this, among other things.

Even when the caller is blind to the data the function uses (to the C compiler it's just a reference pointer).


Again, nothing prevents declaring and using C structures, but I avoid them as much as possible in the API.

The only exception in the non-kernel code, I think, is the string structure, but it has 3 members and is very generic.

Otherwise I avoid using binary C structures as arguments in function calls that need to be potentially cross-compiler/network safe.

Strings are packed internally with the bitcore variable-string format for easy serialization to the binary p2p protocol.

Like this
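
Presumably something along these lines; a sketch of the Bitcoin-style var_str layout (a CompactSize length prefix followed by the raw bytes; the helper names are mine, and the multi-byte prefixes assume a little-endian host):

Code:
/* Sketch of packing a string in the Bitcoin-style "var_str" format:
   a CompactSize (varint) length prefix followed by the raw bytes.
   Helper names are hypothetical. Returns bytes written. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

static size_t write_compact_size(uint8_t *out, uint64_t n)
{
    if (n < 0xFD) {
        out[0] = (uint8_t)n;
        return 1;
    } else if (n <= 0xFFFF) {
        uint16_t v = (uint16_t)n;
        out[0] = 0xFD;
        memcpy(out + 1, &v, 2);
        return 3;
    } else if (n <= 0xFFFFFFFFu) {
        uint32_t v = (uint32_t)n;
        out[0] = 0xFE;
        memcpy(out + 1, &v, 4);
        return 5;
    } else {
        out[0] = 0xFF;
        memcpy(out + 1, &n, 8);
        return 9;
    }
}

static size_t pack_var_str(uint8_t *out, const char *s, size_t len)
{
    size_t n = write_compact_size(out, (uint64_t)len);
    memcpy(out + n, s, len);
    return n + len;             /* total bytes written */
}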

Say bye to zeroday exploits, bad memory accesses, etc., etc.

Say hi to cross-compiler binary compatibility, cross-language interfaces, safe and secure distributed applications, etc.


I'm sure you see what I'm getting at :D :D
787  Alternate cryptocurrencies / Altcoin Discussion / Re: Do you think "iamnotback" really has the" Bitcoin killer"? on: April 05, 2017, 12:13:07 PM
The C compiler can only understand the "leaves" of the object: the variables that can be represented by native C types (int, float, strings, pointers, arrays, etc.). It doesn't understand the type of the object tree; to the C compiler, every object is a reference pointer.

Internally there is static typing for objects with built-in types.

Or you can use an "object template", close to a typeclass, to instantiate an object based on a compile-time structure, using those built-in types as a static type definition in a JSON-like format.

The C compiler won't understand anything beyond its basic native types, in the leaves of the objects.

Objects can have a static type associated with their reference, and a name.

That's the advantage over lists of C structures: members can be referenced by name and type based on runtime input, close to functional programming.

And structures can be manipulated by the same functions whether they were deserialized from binary data, constructed from a JSON object, or constructed manually in C from the C compiler's binary types.

But different compilers can pack struct members differently in binary.

In absolute terms nothing prevents using C structs, but you need specific glue code on both sides to deserialize them (a compiler can have weird binary packing of structures) and turn them into JSON.

With this you get instantiation based on a typeclass-like format, serialization to binary data and conversion to text JSON automatically, with reference counting, arrays, lists, collections, thread safety, dynamic linking at the binary level, and interfaces implemented in cross-compiler/system executable binaries, without glue code specific to each structure.

Static object types can be defined from built-in structures/objects, native C compiler types, and arrays/lists of those.

788  Alternate cryptocurrencies / Altcoin Discussion / Re: Do you think "iamnotback" really has the" Bitcoin killer"? on: April 05, 2017, 12:01:12 PM
The tree can be serialized to binary format and to text JSON. Both.

The node has both the p2p protocol in binary data and the RPC interface in JSON, working on the same objects.

Okay, so you support APIs with either binary compatibility or JSON.

The type definition will always escape the C compiler's comprehension, but typedef aliases on the reference pointer can help.

Why can't you translate these JSON objects to binary data structures in C? Which data structures can't a C struct model?

It would be much more elegant and typed to access fields of data structures with the normal dot notation instead of the boilerplate of library calls.

The articulation at the higher level won't be made in C. It can be distributed in pure binary form for Linux and Windows; modules with new RPC interfaces can be added to the node without recompiling.

The relevant part of the API for app programmers is not there, but in the JavaScript code, with the RPC/JSON API.

Articulation is a strange word to use there. I think what you mean to say is that the high-level coding will be done in JavaScript and calls can be made to C modules which can be loaded at run-time.


There can be a way to pack a binary structure into a single variable in binary form. I do this for DER signatures.

When there is a complex hierarchy and the types need to be kept across the layers that transport it (e.g. event loops dealing with events whose members have different types), it can be useful to have meta-typed objects like this.

It can safely transport a complex data structure with a generic reference-pointer type. And the object can be serialized to binary data and hashed, or to textual JSON for the RPC/JSON and JS APIs.


Yes, basically the idea is to have binary plugin modules implement the RPC interface: the RPC method is the exported symbol name, the class is the module name, and params are passed with the tree structure (typedef aliases could be added for basic C compiler checks).
789  Alternate cryptocurrencies / Altcoin Discussion / Re: Do you think "iamnotback" really has the" Bitcoin killer"? on: April 05, 2017, 11:37:05 AM
And there is the "binary form": a tree of referenced objects, constructed from text JSON, or manually with the API (or eventually from a script).

The idea is to completely abstract function calling and parameter passing, using a format that can be converted to/from JSON transparently and automatically, with object reference counting, stored internally as binary data aligned on 128 bits for fast SIMD operations.

So you want to serialize binary data structures to JSON text and deserialize them back to binary data structures? Why?

The same function prototype in C can be used to declare different functions with different input parameters.

Like a generic function type that can be implemented with different data formats for arguments, and used to define the JSON/RPC API from the C side.

You mean you are supplementing C with a meta type system residing in the run-time parsing of these JSON tree structures? But then your meta type system is not statically checked at compile-time.

The p2p protocol uses a binary data format; OpenGL too, crypto too.

I thought you wrote you are serializing these to JSON text? Why are you now saying you are transmitting them in binary format?

Your communication is very difficult for me to understand.

Yes, it can't be checked at compile time, but

Why the "but"? Can it or can't it?

there is a way to have a static type definition too, DOM-object style, with a default structure template associated with the meta type, making sure all objects of this meta type have a certain forced structure on instantiation, even at the binary level (so they can be serialized/hashed from a JSON or binary data definition).

What do these words mean?

Do you understand that it is very difficult to get a huge number of programmers to adopt some strange framework written for one person's preferences? Generally, changes in programming have to sort of follow what is popular and understood.

If you have a superior coding paradigm, then it should be something that can be articulated fairly simply, and programmers will get it easily and say "a ha! that is nice!".

Something that is very convoluted to explain is probably not going to be popular.

The tree can be serialized to binary format and to text JSON. Both.

The node has both the p2p protocol in binary data and the RPC interface in JSON, working on the same objects.

The type definition will always escape the C compiler's comprehension, but typedef aliases on the reference pointer can help.

The articulation at the higher level won't be made in C. It can be distributed in pure binary form for Linux and Windows; modules with new RPC interfaces can be added to the node without recompiling. Nobody has to see a bit of the source to develop a JS app with it.

And if it's to be used at the low level, I'll document the low-level API, so it can be used to make C or C++ apps. But that's not the point for the moment.


The relevant part of the API for app programmers is not there, but in the JavaScript code, with the RPC/JSON API.

Only the blockchain protocol implementation, or the host side of the interface for application modules, has to be done in C with the internal API.

JS app developers or node hosters can just get the exe and modules and build their app from the RPC API.

It could be in assembler or Lisp; it wouldn't change a thing.

I can document the internal API and interfaces, but it's already all in the source code, and there are examples.

To program high-level scripting with it, you need to know the high-level API with the tree.


If there were already a well-adopted solution for this that made all app developers happy, a safe, secure, efficient high-level distributed application framework, I would say OK, but there isn't... so what next...

790  Alternate cryptocurrencies / Altcoin Discussion / Re: Do you think "iamnotback" really has the" Bitcoin killer"? on: April 05, 2017, 11:30:36 AM
I asked you to please:

For example, I have no idea why you are serializing to JSON textual format in your C code and not just passing (little or big endian) binary data that is compatible with C structs. And please don't write another long explanation; I can't really understand your descriptions well. You try to say too many things at the same time, and it ends up not being that comprehensible.

Yet you still wrote another wall of text. Can't you make your point more concisely? Shouldn't you think more carefully about what you want to write?

In one sentence: the idea is to have a representation of a hierarchy of objects and lists (keyname-ref; a collection of collections of collections) manipulated in C and used as function arguments, to facilitate cross-compiler compatibility and memory-leak detection, to allow simple high-level operators on objects and variables to be expressed from C, and to be convertible to/from textual JSON with generic functions.

Okay this is slightly better communication. Now you are talking to me in high-level concepts that can be digested and understood.

1. I don't need cross-compiler compatibility if I am using Java or JavaScript that runs everywhere. Performance and up-time hardening are not my first priorities. That will come later. I am one guy trying to get to testnet, not trying to write the perfect C++ implementation on the first draft of the code.

2. I don't need memory-leak detection (i.e. refcounting) if I have GC from Java, JavaScript, or Go.

3. Emulating high-level data structures in C with a library means the static typing of those data structures is lost. I remember you wrote that you didn't want to use C++ because it is a mess. So apparently you decided to forsake static typing.

4. I would prefer to have a language which can statically type the data structures and which doesn't require the boilerplate of library calls for interfacing with higher-level data structures.

In other words, I see you have made compromises because of priorities which you think are more important. And what are those very important priorities? Performance?

1. Uptime should already be good :) but yes, you can write code in C using this data format for function arguments and call it from JS or Java, even remotely via HTTP/JSON.

2. Yes, normally; I need to check the particular case sent in PM, but for most things, yes.

3. Static typing can be emulated at the meta-typing level at run-time, hardly by the C compiler; but maybe some compile-time check tricks could be done with macros or pragmas (see the sketch below).

4. There is some form of static typing internally, but it's not visible at the C level. It could be seen by a higher-level script supporting static typing.
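
For point 3, one illustration of the kind of trick that could work (my sketch, hypothetical names): wrapping the generic reference in distinct one-member structs gives the C compiler something to check, and C11 static_assert can verify layout assumptions at compile time:

Code:
/* Sketch (hypothetical names): "strong typedef" wrappers so the C compiler
   rejects mixing differently meta-typed references, plus a compile-time
   layout check. */
#include <assert.h>

typedef struct obj_ref obj_ref;              /* opaque generic reference */

typedef struct { obj_ref *r; } image_ref;    /* distinct wrapper types */
typedef struct { obj_ref *r; } event_ref;

static_assert(sizeof(image_ref) == sizeof(obj_ref *),
              "wrapper adds no overhead");

static void handle_image(image_ref img) { (void)img; /* ... */ }

void example(event_ref ev, image_ref img)
{
    (void)ev;
    handle_image(img);          /* OK */
    /* handle_image(ev); ...would be a compile-time type error */
}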


Performance, not in the short term.



Initially the real motivation is an operating-system project based on a microkernel, with system-agnostic binary modules that can be compiled from Windows or Linux, abstracting away most of the need for complex memory allocation and object trees at the driver level.

So it can be booted directly on a Pi or a PC, or in VirtualBox, from bare metal, with RPC and distributed modules also in mind, for doing efficient server-side operations in C, for 3D or data processing, with distributed applications programmed on top of it.

Like an application server, with integrated crypto, vector math, and data lists; somewhat like a small Tomcat for embedded systems, oriented toward JSON and webapps.


The original goal was this, except I integrated modules to deal with the blockchain protocol and implemented the low-level functions with the Win32/Linux kernel APIs to make blockchain nodes with an RPC server.
791  Alternate cryptocurrencies / Altcoin Discussion / Re: Do you think "iamnotback" really has the" Bitcoin killer"? on: April 05, 2017, 11:02:50 AM
Well, idk what format you want to use to define the API then? (To start somewhere.)

Yes; otherwise, see you in 6 months when you have code and an API to show.
792  Alternate cryptocurrencies / Altcoin Discussion / Re: Do you think "iamnotback" really has the" Bitcoin killer"? on: April 05, 2017, 10:53:01 AM
And there is the "binary form": a tree of referenced objects, constructed from text JSON, or manually with the API (or eventually from a script).

The idea is to completely abstract function calling and parameter passing, using a format that can be converted to/from JSON transparently and automatically, with object reference counting, stored internally as binary data aligned on 128 bits for fast SIMD operations.

So you want to serialize binary data structures to JSON text and deserialize them back to binary data structures? Why?

The same function prototype in C can be used to declare different functions with different input parameters.

Like a generic function type that can be implemented with different data formats for arguments, and used to define the JSON/RPC API from the C side.

You mean you are supplementing C with a meta type system residing in the run-time parsing of these JSON tree structures? But then your meta type system is not statically checked at compile-time.

The p2p protocol uses a binary data format; OpenGL too, crypto too.

Yes, it can't be checked at compile time, but there is a way to have a static type definition too, DOM-object style, with a default structure template associated with the meta type, making sure all objects of this meta type have a certain forced structure on instantiation, even at the binary level (so they can be serialized/hashed from a JSON or binary data definition).
793  Alternate cryptocurrencies / Altcoin Discussion / Re: Do you think "iamnotback" really has the" Bitcoin killer"? on: April 05, 2017, 10:48:47 AM
I asked you to please:

For example, I have no idea why you are serializing to JSON textual format in your C code and not just passing (little or big endian) binary data that is compatible with C structs. And please don't write another long explanation; I can't really understand your descriptions well. You try to say too many things at the same time, and it ends up not being that comprehensible.

Yet you still wrote another wall of text. Can't you make your point more concisely? Shouldn't you think more carefully about what you want to write?


In one sentence: the idea is to have a representation of a hierarchy of objects and lists (keyname-ref; a collection of collections of collections) manipulated in C and used as function arguments, to facilitate cross-compiler compatibility and memory-leak detection, to allow simple high-level operators on objects and variables to be expressed from C, and to be convertible to/from textual JSON with generic functions.
794  Alternate cryptocurrencies / Altcoin Discussion / Re: Do you think "iamnotback" really has the" Bitcoin killer"? on: April 05, 2017, 10:36:05 AM
(The rest is to answer other points.)


I already have a full API, and I did not make this framework specially for blockchain; for applications, yes, I have the blockchain ecosystem specifically in mind, but the framework can handle all kinds of things (ray tracing, manipulation of graphic objects, manipulation of blockchain objects, private keys, signatures, in-browser staking, etc.).

All the API is documented in the white paper, and in other places. All the source code is on the git. There are working examples running.

You have all the low-level code and explanations in the PMs.

Now, if you can't understand my grammar, don't have time to read my code, and your idea is to program a blockchain and an API for distributed applications alone from scratch, including high-level OO interfacing, good luck with that :) I'll see where you get, lol

If you are not interested in collaborating, again OK; I have ideas for handling most of the issues you raise in the git discussion: local stack frames, circular references, multithreading, no memory leaks, asynchronous events with generic function declarations, compatibility with TypeScript/JSON and JavaScript objects, list/array processing.
795  Alternate cryptocurrencies / Altcoin Discussion / Re: Do you think "iamnotback" really has the" Bitcoin killer"? on: April 05, 2017, 10:31:43 AM
I asked you to please:

For example, I have no idea why you are serializing to JSON textual format in your C code and not just passing (little or big endian) binary data that is compatible with C structs. And please don't write another long explanation; I can't really understand your descriptions well. You try to say too many things at the same time, and it ends up not being that comprehensible.

And you still wrote another wall of text:


There are two formats for the "JSON data".

There is the "text form", coming from JSON/RPC (from the JavaScript app).

And there is the "binary form": a tree of referenced objects, constructed from text JSON, or manually with the API (or eventually from a script).

The idea is to completely abstract function calling and parameter passing, using a format that can be converted to/from JSON transparently and automatically, with object reference counting, stored internally as binary data aligned on 128 bits for fast SIMD operations.

The same function prototype in C can be used to declare different functions with different input parameters.

Like a generic function type that can be implemented with different data formats for arguments, and used to define the JSON/RPC API from the C side.

At the same time, it's also the internal C API, used by C programs like a regular DLL API, but using the JSON binary-form representation of the arguments instead of C-style argument passing (C-style argument passing is only used for simple native C types and pointers).



796  Alternate cryptocurrencies / Altcoin Discussion / Re: Do you think "iamnotback" really has the" Bitcoin killer"? on: April 05, 2017, 10:24:47 AM
Quote
For example, I have no idea why you are serializing to JSON textual format in your C code and not just passing (little or big endian) binary data that is compatible with C structs. And please don't write another long explanation; I can't really understand your descriptions well. You try to say too many things at the same time, and it ends up not being that comprehensible.

There are two formats for the "JSON data".

There is the "text form", coming from JSON/RPC (from the JavaScript app).

And there is the "binary form": a tree of referenced objects, constructed from text JSON, or manually with the API (or eventually from a script).

The idea is to completely abstract function calling and parameter passing, using a format that can be converted to/from JSON transparently and automatically, with object reference counting, stored internally as binary data aligned on 128 bits for fast SIMD operations.

There is reference counting, thread safety for array manipulation, etc. Internally, it uses native C types to store/read the data, a keyname hash, and the tree structure itself (lists of named/typed child references, etc.), with direct memory pointers.

The memory holding the native C type, and direct pointers to it, are never accessed by the application directly (or they can be, explicitly, if you want a raw pointer to the binary data; the equivalent of Rust unsafe code).


When a C function makes the call:

the caller creates the tree manually in the binary form, from the binary data, instead of using a structure (it can be transformed to/from textual JSON with a generic function for the JSON/RPC interface);

it passes a C pointer to the reference of this object, stored in binary form, to the callee function (and another C pointer to an object reference for the output result, if needed).

The function can access the whole tree, add or remove elements, add an array to it, add elements to a child array, copy object references into a local permanent internal list, etc.

On return, whether the function is just supposed to modify the content of the passed object or to add/remove elements to/from it, everything can be tested and accessed via the tree interface from the caller (and it just passed a C pointer, with total binary compatibility, also useful for logging and many other things instead of varargs/stdio).

If the function returns something to the caller (e.g. for an HTTP/JSON/RPC request), the result can be converted to textual JSON automatically and sent back to the JavaScript caller from the HTTP server.


The same function prototype in C can be used to declare different functions with different input parameters.

Like a generic function type that can be implemented with different data formats for arguments, and used to define the JSON/RPC API from the C side.

At the same time, it's also the internal C API, used by C programs like a regular DLL API, but using the JSON binary-form representation of the arguments instead of C-style argument passing (C-style argument passing is only used for simple native C types and pointers).
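
A hedged sketch of what that calling convention could look like from the C side (every name below is a hypothetical stand-in, declared as a prototype only; it shows the flow, not the real interface):

Code:
/* Sketch of the calling convention described above. Every function here is
   a hypothetical stand-in for the framework's API, declared as an extern
   prototype only; this illustrates the flow, it is not the real interface. */
typedef struct obj obj;                      /* opaque tree object */

extern obj  *tree_new_obj(void);
extern void  tree_add_int(obj *o, const char *key, long v);
extern void  tree_add_str(obj *o, const char *key, const char *v);
extern void  tree_to_json(const obj *o, char *buf, unsigned long n);
extern void  obj_release(obj *o);
extern int   handle_image_request(obj **in, obj **out);  /* the callee */

void example(void)
{
    char json[256];
    obj *req = tree_new_obj();               /* argument tree, not a struct */
    tree_add_int(req, "width", 256);
    tree_add_str(req, "name", "img0");

    obj *res = tree_new_obj();               /* result tree filled by callee */
    handle_image_request(&req, &res);        /* one generic pointer each way */

    tree_to_json(res, json, sizeof json);    /* automatic JSON for RPC reply */
    obj_release(req);                        /* refcounted: last owner frees */
    obj_release(res);
}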

It can be confusing from your perspective, because it's made so that high-level concepts are manipulated and implemented from the C language: you can use all of C on local variables with native C types (int, pointers, etc.) and at the same time manipulate high-level objects.
797  Alternate cryptocurrencies / Altcoin Discussion / Re: Do you think "iamnotback" really has the" Bitcoin killer"? on: April 05, 2017, 07:08:28 AM
SegWit still seems less 666 in some respects, because it's more deterministic and removes some burden from the reward/PoW/emission scheme, but it's maybe 666 in other areas, along the lines of the fractional-reserve issue: the illusion of duplicated BTC off-chain being used interchangeably, while not always being able to put everything back on-chain at any time. It's not totally equivalent, and I'm not sure they are very honest about all the intended side effects on the network.
798  Alternate cryptocurrencies / Altcoin Discussion / Re: Do you think "iamnotback" really has the" Bitcoin killer"? on: April 04, 2017, 11:11:30 AM
Bitcoin is total 666 ;D ;D
799  Alternate cryptocurrencies / Altcoin Discussion / Re: Do you think "iamnotback" really has the" Bitcoin killer"? on: April 04, 2017, 10:51:47 AM
Satoshi's PoW is diabolical.

I kinda agree on this ;D

I don't sort people into NWO and non-NWO, but I wouldn't put Satoshi in the humanist/altruist category that much :D

I did my little equation on this side too, lol

One important fact already: PoW is probabilistic, so it's like gambling, and most religions would say expecting reward from probabilistic things is not very good :) Every true humanist knows that lack of order and chaos always favor those who have more. In a casino you have a better chance of making a profit with a larger fund, because you have more chances to trick the probabilities in the long run. It's why mining gets pooled, and centralized. If people organized like this to trick a casino, I'm sure they would bankrupt it :D

Also, from the way the code is made and all, I think he is someone who is used to coding for profit. Not that that's bad in itself, but it still leads to a lack of care in many aspects of development or code, to get maximum profit in a short time. I can't say he looks like someone involved in open-source projects, Linux development, or the collaborative-development GPL profile. Most likely someone who comes from the banking or financial sector, who coded financial apps of some sort.

He doesn't seem to have much of that human wisdom and altruist attitude; he seems more motivated by profit (devil) and probabilistic logic (devil; god is order), with very materialistic takes on things.

Well, there can be more paranoid theories too :D
800  Alternate cryptocurrencies / Altcoin Discussion / Re: DECENTRALIZED crypto currency (including Bitcoin) is a delusion (any solutions?) on: April 04, 2017, 09:39:33 AM

Well, my idea is that if 99% of humanity could drop dead by tomorrow, except my friends and family, and those that do useful things for me, that would be a positive thing for me, because the 70 million people remaining would have much more resources at their disposal than the 7 billion idiots running around on this earth.

Life on earth would be a dream again, with 70 million of us.


This is entirely false. Standard of living and available energy per head have grown steadily and exponentially with global population growth. Places with a high density of population always have more manufactured resources at their disposal.


That is first of all only true as long as economic activity is not resource-limited; and secondly, this is mostly also the case because the concentrated areas (the city) are applying extortion on the less dense areas (the countryside); that is, there is a structural flux of wealth from less dense areas to provide dense areas with the stolen wealth, mainly because the top layers of hierarchy are in the denser areas.

On the other hand, you are right that structural investments are much more efficient in *sufficiently* dense areas (a road is optimally efficient for a given traffic; too much traffic jams it, but too low a traffic makes it too expensive per amount of traffic), and costs that scale with distance are also lower in denser areas.

However, I wasn't talking about "putting a human every 20 km" or something like that. You can very well have a few cities with 70 million people. But now we have very, very high environmental costs that orient a huge amount of our production value into being energy-efficient, low-pollution, etc... In fact, we can't sustain that, which is why we have to have most of our industrial production in India and China where the environmental costs are still lower. This is a cost that is entirely gone when we divide humanity by 100. We can drive cars that consume 50 l/100 km without a problem, and we have petrol for the coming 500 years or more. We don't have to get nervous about climate change. We can have large forests and still have all the (mechanised) agriculture we'd like.

Our economy has been hurt by resource limits for about 20-30 years and it won't improve. South America and Africa are being urbanized at the cost of huge parts of nature, in a totally non-sustainable way. Until not so long ago, labour was a resource and the more people there were, the more labour on offer there was; that is fading away.

We are now hitting a resource limitation which is strangling us in the same way bitcoin's 1MB block size is strangling it. This has never happened before, because when it happened in western countries, we could colonize, and afterwards, we could delocalize industrial production. We can't any more.

This is normal. Every population is limited by resources if it grows. We could push the limits for some time. We still can, a bit.


Well, it's a well-known fact that last century the economy switched from the primary sector (resource-based) to the tertiary sector (service-based), and maybe, as iamnotback says, it is switching to knowledge-based; in which case it's hard to say that more humans don't mean more knowledge potential, and so more wealth.

Most of the things you buy are services or manufactured products; the economy is more there than in apples or iron or wood.

Humans can always adapt to new situations and find ways to increase resource production, lower costs, and improve efficiency, transport, etc.; that's more the way to go ;)

Actually, Amartya Sen studied many cases of famine around the world; they are rarely due to food shortage. Every time, the problem is access to the resources more than producing them, and it's related to free-market thinking: when some economic sector collapses or there are social problems, some people are denied access to the resources. That's how the free market becomes oppression in reality. The problem is rarely production capacity.

Supermarkets are full, gas pumps are full; that's not really the main problem.

The 1MB limit is strangling it, but the same cause also makes it more alive, and it's a sign of growth. It needs to adapt, to increase efficiency to fit the current level of energy cost. But humans are able to do this :)