Bitcoin Forum
Show Posts
1  Alternate cryptocurrencies / Announcements (Altcoins) / Re: $XAI Sapience AIFX - Decentralized AI | 11% PoS | PlumeDB,IBTP on Testnet on: April 06, 2015, 08:25:51 PM
Share your support for XAI by adding the signature below.

Give more people the chance to find out about Sapience in other parts of the forum by increasing its visibility.

Testing to get the sig below to link to this thread.

****EDIT****

Woot it works

Below is the updated sig code with a working link.

Code:
[center][url=https://bitcointalk.org/index.php?topic=864895.msg9591380#msg9591380]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]
[color=orange][size=12pt]◍ [/size][/color][size=12pt][b][color=#0B610B]I,[/color] [color=#0B610B]XA[/color][color=#0B610B]I.[color=#74DF00]And you?[/color][/b][color=orange][size=12pt] ◍[/size][/color]     [size=13pt][size=7pt][color=red]◍[/color][/size][size=9pt][color=red]◍[/color][/size][size=11pt][color=red]◍[/color][/size] [color=#013ADF]Take [/color][color=#0040FF]Part [/color][color=#2E64FE]In[/color] [color=#5882FA]Sapien[/color][color=#2E9AFE]ceAIFX [/color][color=#5882FA]Intelligent [/color][color=#2E64FE]Block[/color][color=#0040FF]chain [/color][color=#013ADF]Techn[/color][color=#013ADF]ology [/color][size=15pt][size=11pt][color=red]◍[/color][/size][size=9pt][color=red]◍[/color][/size][size=7pt][color=red]◍[/color][/size][/size][/size][/size]
—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red]—[/color]—[color=red][/url][/center]
2  Alternate cryptocurrencies / Announcements (Altcoins) / Re: $XAI Sapience AIFX - Decentralized AI | 11% PoS | PlumeDB,IBTP on Testnet on: April 01, 2015, 11:15:31 AM



There is a huge amount of potential waiting to be unleashed in Phase 4.

Sapience AIFX is massively underrated, and luckily for anyone with half a brain it's cheap enough right now to accumulate.

Phase 1 check!
3  Alternate cryptocurrencies / Announcements (Altcoins) / Re: $XAI Sapience AIFX - Decentralized AI | 11% PoS | PlumeDB,IBTP on Testnet on: March 22, 2015, 02:18:07 PM
Cedric, any news man?? Can you please give us an update on things? I truly appreciate it!!

Going by the last news reported in Slack, he is very nearly ready to drop the AI core.
4  Alternate cryptocurrencies / Announcements (Altcoins) / Re: $XAI Sapience AIFX - Decentralized AI | 11% PoS | PlumeDB,IBTP on Testnet on: March 11, 2015, 09:45:18 PM
If this is not raising red flags, what does? Makes me giggle.

That is actually a demonstration of exactly what XAI is being built to do.

But anyway, seeing as you are just here to troll, I might as well point out that it sounds like you've been scammed too many times because of your own poor judgement... oh wait... you have! hahaha
5  Alternate cryptocurrencies / Announcements (Altcoins) / Re: $XAI Sapience AIFX - Decentralized AI | 11% PoS | PlumeDB,IBTP on Testnet on: March 11, 2015, 09:09:37 PM

Looks like a scam! You dumb peasants fall for these tricks?? I laugh

Would you be able to explain why it looks like a scam, and point out in more detail what part of the project so far leads you to believe the XAI dev is not capable of delivering?
6  Alternate cryptocurrencies / Announcements (Altcoins) / Re: $XAI Sapience AIFX - Decentralized AI | 11% PoS | PlumeDB,IBTP on Testnet on: March 10, 2015, 11:00:31 PM
Think of the possibilities once Phase 4 is complete and neuromining is actually happening.

Harnessing all that unused GPU and idle processing power for the greater good, providing analytical, number- and code-crunching solutions faster than ever before.

7  Alternate cryptocurrencies / Announcements (Altcoins) / Re: $XAI Sapience AIFX - Decentralized AI over coin network | Now on Coinwallet.co on: February 12, 2015, 08:31:45 PM
Weekly Consolidation Update 3 Containing Communications Direct From XAI Dev


Previous consolidations:

Week 2 : https://bitcointalk.org/index.php?topic=864895.msg10369861#msg10369861
Week 1 : https://bitcointalk.org/index.php?topic=864895.msg10301510#msg10301510

Following on from last week, PlumeDB is now on testnet for the Slack group. The distributed database, which serves as the base layer for all future AI applications, is now finished and will be made available for public consumption once initial debugging is completed.

There has been a ton of talk happening in the Slack group, and below is a snippet of it that hopefully captures the more relevant parts.


Development notes:

What I'm working on right now: I added a wrapper method for logging from within the plume-related classes that first notifies the UI through a signal I added on uiInterface, and now I'm adding a tab in the UI that pulls that in and puts it in a list, so we can see within the wallet that it's "doing stuff" behind the scenes. There will be a log tab and a messages tab so we can watch things happening in real time: the log tab is just general activity, and the messages tab is p2p messages sent to and received from nodes. It'll make it a lot easier to a) let people know that "something is happening" and b) trace and troubleshoot stuff.
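
For illustration, a minimal sketch of the shape such a wrapper might take (simplified to a single message string, whereas the real method is printf-style per the snippet further down; the NotifyPlumeLog signal name is invented, and only uiInterface is named in the notes):

Code:
// Hypothetical sketch of the logging wrapper described above: write the
// log line as usual, then raise a UI signal so the new log tab can show
// it in real time. NotifyPlumeLog is an invented signal name.
void PlumeLog(const std::string& strMsg)
{
    LogPrintf("plume: %s\n", strMsg.c_str());   // normal debug log output
    uiInterface.NotifyPlumeLog(strMsg);         // push to the wallet UI log tab
}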

I want each peer to pass their count of all the DHT entries they have, so I added a field on the message that gets broadcast, and a column in the UI on the peers tab. But to get the count, you can't just call Count() on the database; I had to write my own little method that iterates the entire database, which isn't a great way to do it... when you're broadcasting the message every 10 seconds.
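
As a purely illustrative sketch of that count-by-iteration (the function and database handle here are hypothetical, not the actual Sapience code), LevelDB offers no counting primitive, so every key has to be visited:

Code:
// Hypothetical count-by-iteration over a LevelDB store. LevelDB has no
// Count() primitive, so the whole keyspace must be walked: O(n) per call,
// which is why recomputing it on a 10-second broadcast timer is costly.
#include <cstdint>
#include "leveldb/db.h"

uint64_t CountDhtEntries(leveldb::DB* pdb)
{
    uint64_t nCount = 0;
    leveldb::Iterator* it = pdb->NewIterator(leveldb::ReadOptions());
    for (it->SeekToFirst(); it->Valid(); it->Next())
        nCount++;                      // one DHT record per key/value pair
    delete it;
    return nCount;
}

An obvious refinement would be to cache the count and adjust it on insert/delete instead of re-walking the database.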


Keeping Sapience Safe:

We are going to put Sapience on the moon.

https://twitter.com/SapienceAIFX/status/564976214642032640

"We choose to go to the moon." #SapienceSpaceProgram #LunarNode project. More info roadmap/mission calendar soon. $XAI #DreamBigger #CubeSat


Roadmap:

The immediate roadmap is to get the AI core out for Sapience, then the asset market, xaishares, and another round of fundraising, and then a wallet rewrite as a more modern and easy-to-use app across all platforms.


Coming online and what appears to be a reference to the character from the movie Transcendence:

pinn@aifx ~/Sapience/src $ strip sapienced
pinn@aifx ~/Sapience/src $ ./sapienced
pinn@aifx ~/Sapience/src $ Sapience server starting

pinn@aifx ~/Sapience/src $ ./sapienced plmpeerinfo
[
]
pinn@aifx ~/Sapience/src $

Daemon compiled, RPC did something, no peers yet, but... it's alive.


How to set your own fees as a node:

You can override the default fees in the .conf file with:

datareservationrate
dataaccessrate
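
For example (an illustrative snippet only; the rates are made up, echoing the 0.0001 XAI/kbh figure from the fee discussion further down, and actual defaults may differ):

Code:
# Hypothetical fee overrides in the wallet's .conf file.
# Rates are per kilobyte-hour, per the dev's fee notes.
datareservationrate=0.0001
dataaccessrate=0.0001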


Locating other Plumes via RPC during testnet debugging:

{
"myplume" : {
"plumeid" : "454f3d0395828f802236d2fa934389806eece2514685414f97f4eb98b6c674f2",
"plumename" : "Test"
},
"myplume" : {
"plumeid" : "a46d5bed6e07279cc388b6776bf05b3f767cbd43d3f29e9a948ca4b7b825dc0d",
"plumename" : "Help Test"
},
"myplume" : {
"plumeid" : "e918926a2a7f25c507c6dd97c206eb1ae60d10a3822c42372434ea7c183e065e",
"plumename" : "test"
},
"publicplume" : {
"plumeid" : "454f3d0395828f802236d2fa934389806eece2514685414f97f4eb98b6c674f2",
"plumename" : "Test"
},
"publicplume" : {
"plumeid" : "f9e2ab418ef99a1e11cc63770f1596d4b1acde7e1148bc4e065c4bb37fe9991e",
"plumename" : "Test Plume 1"
},
"publicplume" : {
"plumeid" : "fb137f00b5ee1d4320d07abe818535d7354d2276d256e4ef586a656c1da0224b",
"plumename" : "Test Plume 2"
}
}


First results from testing look encouraging:

So, some UI issues and a few other things to tweak... but so far it's working far better than I expected, tbh.

The hardest part is the peering: making sure the peers see each other and respond to messages, etc. This data proposal thing is weird: the proposals come back, you accept them, it should then raise a signal to the UI that the plume is now alive, and it should then get added to the table. Also, when you restart the wallet... I have to put something in InitializePlumeCore so that it sends the signals to the UI for each record it reads off disk. It should also probably only put your plume in one of the maps; if it's a public queue, it's showing up both in the public map and the "my" map.

Behind the scenes it is tracking everything in these core maps and sets:

CCriticalSection cs_plumecore;
std::map<uint256, CPlumeHeader> mapMyDataPlumes;                      // map of data plumes this node originated
std::map<uint256, CPlumeHeader> mapMyServicedPlumes;                  // map of data plumes this node is a Neural Node for
std::map<uint256, CPlumeHeader> mapPublicDataPlumes;                  // map of public data plumes
std::map<uint256, CDataReservationRequest> mapReservationsWaiting;    // data reservation requests sent, awaiting proposals
std::map<uint256, CDataReservationProposal> mapProposalsReceived;     // data reservation proposals received, awaiting acceptance
std::map<uint256, CDataReservationProposal> mapProposalsWaiting;      // data reservation proposals waiting for client to accept
std::map<uint256, CDataProposalAcceptance> mapProposalsAccepted;     // data reservation proposals accepted
std::map<uint256, int64_t> mapPeerLastDhtInvUpdate;                   // last time we requested infohash inventory from the peer
std::map<uint256, std::vector<uint256> > mapPeerInfoHashes;           // list of peers for each infohash
std::set<uint256> setServicedPlumes;                                  // plumes I am a Neural Node for
std::map<uint256, CDhtGetResponse> mapGetResponses;                   // responses to get requests

std::map<uint256, std::vector<CDataChunk> > mapChunksWaiting;         // chunks awaiting use


There are some more RPC commands I'm working on, like the chunks thing, which will let you walk a data set in chunks of 100 at a time. There is an API class, CPlumeApi, which backs the RPC calls and which the Python shell should bind to / be the primary interface to the data functionality. I had to sprinkle stuff like this around so it doesn't blow up on Android:


#ifndef ANDROID
   // Logging raises UI activity (the wrapper signals the UI), which
   // Android tolerates poorly; skip it there.
   PlumeLog("%s %s %s %d", msgType.c_str(), peerAddress.c_str(), msg.c_str(), sizeKb);
#endif
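
On the chunk-walking RPC mentioned above, a schematic sketch (CPlumeApi and CDataChunk are named in the notes, but the methods and the paging shown here are invented for illustration):

Code:
// Hypothetical walk over a data set in chunks of 100 via CPlumeApi.
// GetChunk and ProcessRecord are invented; only the class name and
// CDataChunk appear in the dev notes.
static const size_t PLUME_CHUNK_SIZE = 100;

void WalkPlume(CPlumeApi& api, const uint256& plumeId)
{
    size_t nOffset = 0;
    for (;;)
    {
        std::vector<CDataChunk> chunk =
            api.GetChunk(plumeId, nOffset, PLUME_CHUNK_SIZE);  // hypothetical call
        if (chunk.empty())
            break;                     // walked past the end of the data set
        for (size_t i = 0; i < chunk.size(); i++)
            ProcessRecord(chunk[i]);   // application-defined handler (hypothetical)
        nOffset += chunk.size();
    }
}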


Android is hyper-sensitive to activity on the UI thread. It would be nice to get 8^3 nodes, but I guess that's unrealistic for testing... since at the lowest level it seeks to penetrate 8*8*8 = 512 nodes as an infohash propagates, being able to test it "at scale" would be cool, but is unlikely.


Conclusions from JoeMoz after the first testnet round:

This is nice... we have an overlay network, and it functions! :)

8  Alternate cryptocurrencies / Announcements (Altcoins) / Re: $XAI Sapience AIFX - Decentralized AI over coin network | Now on Coinwallet.co on: February 09, 2015, 01:05:16 AM
A friendly bump of the dev-related info (see the February 5 post below) that has been buried under much noise.

Next weekly coming on Thursday 12th.


9  Alternate cryptocurrencies / Announcements (Altcoins) / Re: $XAI Sapience AIFX - Decentralized AI over coin network | Now on Coinwallet.co on: February 05, 2015, 08:49:49 PM
Weekly Consolidation Update 2 Containing Communications Direct From XAI Dev:

Previous consolidation (Week 1) : https://bitcointalk.org/index.php?topic=864895.msg10301510#msg10301510


There has been a lot of progress since last week, most notably work towards an update that is due to be released very soon.

The following is a range of information that JoeMoz (the XAI developer) has provided across various sources, including Slack.


In relation to comments about him listing SharePoint on his LinkedIn profile:

SharePoint just happens to be one thing I am an expert in... and something I focused on for a couple of years because, frankly, it has the highest billable rates :wink:


The age-old open-source / closed-source dilemma continues:

I mean, the tech we are doing is sufficiently advanced that anyone who tried to rip it off right out of the gate would probably wind up pushing broken stuff.
From an open source perspective it would probably be a valuable contribution to the community at large, like the DHT; but I remember with XQN, within a few days there were a couple of clone coins with the profit explorer graph feature, etc.


Choosing a Name for the Data Layer:

I'm thinking of ditching the "treespaces" name I came up with, because it generates confusion with the biology stuff when you google it. I think I am going to call the entire data layer PlumeDB, because it is something that can be broken out and marketed for a lot more than just the AI stuff: it is essentially a full-blown decentralized database engine running on top of the coin network, funneling its communications over the bitcoin p2p protocol.


Where does the name plume come from?

I thought it sounded cool! I was thinking of clouds of data, so "databases" are plumes in the cloud.


More notes on fee structure:

The way I am doing fees right now, it is based on data reservations by kilobyte-hour, e.g. 0.0001 XAI/kbh. When you initialize a new data plume, you put out a request for data reservations based on the estimated size of the database and the lifetime you want. You get responses from volunteer/available slave nodes that are willing to replicate your data index, and by default you have to pay the total kbh fee to each of the slaves.
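
To put rough numbers on that (a hypothetical reservation using his example rate): a 1,000 KB plume reserved for 24 hours at 0.0001 XAI/kbh works out to 1,000 × 24 × 0.0001 = 2.4 XAI per slave, so if eight slaves accept (the slave count used in the rebuild scenario further down), the total reservation cost would be 8 × 2.4 = 19.2 XAI.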

The actual data itself is, at a low level, distributed across all nodes via a filesystem abstracted into the DHT, and all nodes participate. For public data sets you don't pay for data reservations; consumers pay per use instead.

So, I am thinking maybe there should be a small fee for loading public data, as an anti-spam type of measure. The other thing I am thinking is that maybe all Atom data types (OpenCog atomspaces) should be public, to keep contributing to a large body of knowledge; it seems maybe pointless to load Atoms that are private... hmm. So who should get the fee for public data loads... maybe the first relay node.


How is data loaded onto the Sapience distributed AI network?

You load data either through RPC or the console.

More in-depth information related to XAI and PlumeDB:

http://wiki.dfx.io/display/XAI/Sapience+AIFX+Home

http://wiki.dfx.io/display/XAI/PlumeDB


Hint on Potential New Look:

I'm doing something cool with it, doing the UI in QML/Qt Quick; eventually I want to rewrite the entire wallet using it and ditch the existing hokey interface. It'll let us get a "responsive" UI on Android/different devices so stuff isn't sitting off the side of the screen etc., and get a wallet that actually looks like a modern app and not something from 1998.


Further development related comments:

In the latest Sapience source I have moved the leveldb dependency out from being code included in the source to being an external dependency, so I can use the latest Google leveldb from GitHub and cross-compile for Android easily. It just means an extra step: you have to pull and compile leveldb separately and set the include/lib path. I'm trying to get it so I can use one .pro file. I suppose on Linux you could just install the libleveldb-dev package and do it that way, same as libboost-all-dev etc.
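
In practice (an illustrative workflow, not official build instructions), that extra step would amount to pulling https://github.com/google/leveldb, building it, and then pointing the qmake INCLUDEPATH and LIBS variables in the single .pro file at the resulting headers and library; on Linux, installing libleveldb-dev achieves the same thing, as he notes.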

I should note that this is a test/beta build... so there are definitely TODOs, and under-the-hood tweaking and stuff that we might want to adjust. For example, by default the low-level DHT that is just doing mindless raw data storage tries to get a penetration of 72 nodes for a given record, but I don't know if we even have 72 live nodes on the network, let alone on testnet. And there are rebuild scenarios, like: what if all 8 slave nodes go down? Having the network detect that and automatically assign replacements and rebuild trie indexes, etc.

There are really like 4 overlay networks running on top of each other at once... the low-level DHT, a mapping & rebuild info DHT, the slave PHTs, and the master/originator. The biggest thing we'll have to play around with in testing is seeing what the latency is like. I know there will be latency, because it is a p2p database; so for some use cases it might not be suitable, while for others it might just mean adapting how you work with the data.

The more nodes the better... if we had a couple of thousand nodes on the network, for instance, you could do better load balancing. But just in general, basically _everything_ in the entire system is async.

It's just different, I guess, from anything I've worked with to date at least :wink: It should enable new scenarios. Actually, a good analogy is that using it is more like hitting web services asynchronously, instead of direct database access... but in exchange you get the redundancy / massive scale-out / decentralization.


Is there any way for the system to determine which nodes are closest?

Well, there isn't any geolocation / location-based proximity right now... but that is something I've been thinking about.


Uniqueness?

As far as I know, this is the only DHT implementation that runs over the bitcoin p2p protocol; even the MaidSafe DHT is a parallel/external implementation running over UDP.


Is that useful for anything outside of AI?

There's like a million other things besides AI you can build on top of a decentralized DHT.


Process:

The way I did it is that each key, in addition to a hash, can have 3 attributes, and those get indexed by the slaves in PHTs; that is the value-added service you are getting from the slaves in exchange for the XAI/kilobyte-hour fee. It's key/value storage at the low raw DHT level, but with 3 attribute components in the key. So let's say I want to run a distributed aggregate range query to do a SUM across the data plume where attribute 2 is between A and B... the slaves provide the service of fast lookup to get the subset of infohashes that fall within the criteria; you then use those against DHT2 to get the possible nodes that have each one, and those are looked up against DHT1 to retrieve the individual records/values from the raw key/value store. So with something like a SUM, your slave can pull the subset of infohashes, chunk them out into groups of say 100, dole those out as individual compute operations across nodes, and then the results are concentrated and returned to the originator.
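
Schematically, that SUM flow might look like the following (a purely illustrative outline; every function here is a hypothetical stand-in for the PHT/DHT machinery he describes, and no error handling or async dispatch is shown):

Code:
// Illustrative outline of the distributed SUM described above.
// All functions are hypothetical stand-ins; the layering (slave PHT ->
// DHT2 node lookup -> DHT1 value fetch) follows the dev's description.
#include <algorithm>
#include <vector>

double DistributedSum(const uint256& plumeId, int nAttr, double A, double B)
{
    // 1. Slave PHT: fast range lookup -> infohashes where attribute nAttr
    //    falls within [A, B].
    std::vector<uint256> hashes = PhtRangeLookup(plumeId, nAttr, A, B);

    double nTotal = 0.0;
    // 2. Chunk the infohashes into groups of ~100 and dole each group out
    //    as an individual compute operation (shown sequentially here).
    for (size_t i = 0; i < hashes.size(); i += 100)
    {
        size_t nEnd = std::min(i + 100, hashes.size());
        for (size_t j = i; j < nEnd; j++)
        {
            // 3. DHT2: which nodes may hold this infohash?
            std::vector<CNodeId> nodes = Dht2LookupNodes(hashes[j]);
            // 4. DHT1: fetch the record's value from the raw key/value store.
            nTotal += Dht1GetValue(nodes, hashes[j]);
        }
    }
    // 5. Partial results are concentrated and returned to the originator.
    return nTotal;
}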


DHT1 and DHT2 are just levels within the 4(?) DHTs in PlumeDB?

Layered DHTs... yeah, sort of. I mean, for something like a torrent it's pretty trivial, so a basic k/v DHT works fine; but as soon as you want to do anything more involved you need more metadata etc., so you layer it on top. The raw DHT gives you the redundancy/resilience, etc., and can just focus on getting the values where they need to be.


In response to a question on slaves:

Slaves are responsible for building that multi-rooted PHT; the DHT only knows about hash256 + value... There are TODOs I'm probably not going to get to for the release tonight (today), like being able to configure how much of your free space you want to allocate, so don't go loading gigantic data sets :P


10  Alternate cryptocurrencies / Announcements (Altcoins) / Re: $XAI Sapience AIFX - Decentralized AI over coin network | Now on Bittrex, EmpoEx on: February 03, 2015, 01:19:01 PM
A summary of Sapience dev related info will be posted on Thursday.

Below, in the January 29 post, is a recap of the previous weeks in January.




11  Alternate cryptocurrencies / Announcements (Altcoins) / Re: $XAI Sapience AIFX - Decentralized AI over coin network | Now on Bittrex, EmpoEx on: January 29, 2015, 05:23:43 PM
Get to know Joe Mozelesky, lead developer of Sapience:

The following is a selection of readily available info that Joe provided throughout January 2015 related to Sapience development.


Potential End-User Scenarios:

Pattern analysis and fraud detection algorithms, that is the kind of thing that you could run on XAI.

The other real-world application I can think of for this stuff... I used to do a lot of work on power grids. I could see real-world use cases for both capacity-need prediction and preventative-maintenance predictive analytics for generation owners. I did some work for a company that operates huge wind farms... there's another application: predicting supply based on weather forecasts combined with turbine efficiency, etc.

I could see setups where you have applications built on StreamInsight or Oracle's CEP engine with an adapter plugged in to simultaneously send the data over to the XAI network, feeding it real-time streamed data to run continuous analytics on, like financial trading engines, etc.

Then there's more mundane stuff... like building a feed from your accounting system to send inventory and sales data into it, and using AI to make re-order recommendations for purchasing; depending on your business you might combine that with weather data, data from futures/commodities markets, new housing starts, etc.

Supply chain stuff, like having an AI that analyzes news feeds and can predict shortages of particular raw materials, then sends you alerts saying: hey, in 2 months your supplier A potentially isn't going to be able to make quota because of a shortage of XYZ material, so start bringing another factory online or substitute a different raw material.

There are all kinds of applications where, traditionally, a mid-market company might be using desktop BI (business intelligence) software and basic rules engines to do basic analytics, simply because they don't have access to more powerful tech.

Accessibility Ideas:

What would be really cool is if somebody manages to miniaturize a pico projector to fit in an Android Wear smartwatch... so you could say something like "Sapience Visualize" and it projects an AI network activity map onto the wall, etc. I have to look into how hard it is to do Android notifications from Qt... it would be pretty cool to be able to say, in addition to "AI in your pocket", "AI on your wrist". Those wearables consume notifications over Bluetooth.

Related to Android Qt development:

The only thing I can think to try is rebuilding it with an older toolchain... I built it with 4.9, but maybe I will try 4.8. It seems to be a memory access violation when it's trying to invoke a method, so that makes me think it is the linker issue; apparently there is a bug where the linker does not order the loads correctly, although when watching LogCat it did load gnustl first... it looks like the call table must be wrong/off. It's trying to invoke the uint256 constructor and bombing out with the access violation, so it must have the wrong address for it. To build the apk you have to build the dependencies using the same Android toolchain... that requires jumping through various hoops. I built most of them manually, but to try again I'd probably build them all using the ndk-build system in the Android NDK, e.g. boost, openssl, miniupnpc, bdb.
I had mixed and matched stlport and gnustl in leveldb. I rebuilt the dependencies using the Android NDK standalone toolchain, made sure all of them referenced gnustl, and used the standalone toolchain deployed via the shell script in the NDK (as opposed to trying to build with all the paths set into the NDK itself); I also used the latest leveldb directly from GitHub.
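
(As an aside, and an assumption on my part: the "shell script in the NDK" is presumably make-standalone-toolchain.sh, which copies a self-contained GCC toolchain out of the NDK tree so that dependencies like boost and openssl can be cross-compiled against one consistent STL.)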


LevelDB was pulled out of the .pro file and compiled as a separate dependency.

https://github.com/CedricProfit/SapienceAndroid

Android Wallet v1.0 Ready:

https://play.google.com/store/apps/details?id=com.blockchainsingularity.apps.sapience_qt


Quotient PoS:

I added some auto-optimization of block sizes: there is a new setting on the profit explorer tab where you can input your preferred "block" size, and when staking it will split your blocks down if they are bigger. It won't directly divide them up into chunks of your block size (I originally coded that, and it breaks all of the PoS checks and balances; the PoS stuff all assumes a max of 2 outputs), so it will split in 2, and combine, in a way that seeks your preferred size. It's nice anyway: you can just set it to 367 or whatever the magic number is and forget about it / let it run.
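
As a worked example of that seek-by-halving behaviour (illustrative numbers only): with the preferred size set to 367, a 1,468 XAI output that stakes would split into two outputs of 734, each of which would split again into two of 367 on later stakes, while outputs already at or below the target get combined back toward it rather than split further.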

I've been working on some other code to allow you to break down your blocks all at once. Originally I was going to have this happen automatically on a thread, but I realized this would saturate the capacity of the network if 1,000 wallets were all generating 100 tps; from the bitcoin debate with Gavin we know the theoretical maximum is 7 tps network-wide on bitcoin... and we have 1/10th the block time, so theoretically we can handle 70 tps network-wide.
So I need to think about the best way to do it: either throttle it so it only does so many per minute, have a button... or just leave it and let people do it manually in coin control. Leaving it alone might be fine, as the staking mechanism will slowly optimize it over time anyway. I've been running it this week on my wallet just to make sure everything is OK, and it's been working well for me.
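
Spelling out the arithmetic behind that concern: 1,000 wallets each pushing 100 tps would be up to 100,000 tps of demand, against a theoretical ceiling of 7 tps × 10 (one-tenth the block time) = 70 tps network-wide; hence the need to throttle, add a button, or leave it manual.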


Thinking Ahead for XAI:

One of the things I have been thinking about solutions for is dealing with "bad" or unwanted data on the network. The idea is that on Sapience you can load private data just for running your own algorithm scripts on, but also mark some data sets as public and share them. The only issue with that is people who decide to "troll" the network and load crap onto it... I was thinking a simple solution at first is to provide an interface so that on your node you can just ignore/reject data you don't want to host parts of.
As far as specific algos... what I am working towards is that the Sapience platform will work sort of like Pine scripts in TradingView, where there is a set of generalized services and then specific algos or operations are scripted via a kind of markup... this way it is adaptable and not limited by what is hardcoded.

What I am trying to get to is something that is sort of like a cross between AIML and OpenCog, etc.

Performance with latency etc. may present some challenges: preventing things from stalling out, and staying resilient when there are long-running operations across multiple nodes and one or a couple of them go down.

Operating Costs:

Micro fees... sort of like Ethereum, where there is a cost associated with each operation within a "contract"... what I am thinking is that a "run" would get associated with a transaction and signed by the submitter, and the transaction amount will have to be enough to cover the sum of the operation fees. We will need to think about what fees make sense for both data handling and computation time/operations. If you are running analytics off a real-time feed from your e-commerce website, for instance, you might want to feed a stream and only need each "unit" of data to have a lifetime of a few minutes, vs. loading some trade data for backtesting, where you might want to store it for a couple of days or longer.


Thoughts about how Sapience will pan out, and belief in his project:

I really believe this is going to be a game-changing platform... because right now your only real alternative options are Microsoft and Google, at high cost... there are a handful of startups, but they aren't live yet.


Is XAI like SkyNet?

Heh, that thing isn't even in the same ballpark as what XAI is putting together... I was looking at it, and the token calls itself decentralized, yet it's a combination of a local server running in the end-user environment plus their proprietary centralized "cluster"/DB of data. It's interesting for what it is, but it's not really solving the same problem.

Plagiarism Concerns:

My biggest fear when I start publishing the data architecture design etc. on the wiki is that it's going to get ripped off or show up regurgitated somewhere... I guess there's not much that can be done about that, though, other than just building a better product.

Data:

The data platform for Sapience is pretty generalized, so although I'm building it specifically to support back-ending the AI stuff, in practical application the XAI platform could be leveraged for all kinds of stuff down the road; it all comes down to writing adapters, basically... maybe another analogy is that it's like object-oriented programming, or component-based development. As far as data throughput... right now, the way the existing low-level stack in the bitcoin p2p code works, I would say expect something closer to DSL speeds.
But there is a lot of room for improvement; the existing stack is pretty basic and has arbitrary caps/throttling in it... that's why I was saying in the last video that we can circle back around to do optimization later, because there is plenty of room for it.

For AI work, the speed cap isn't too much of an issue, because essentially we are dealing with operating on streams anyway.

For me it is easy to think about it that way, because I used to do a ton of real-time streamed-data work with Pi. So basically you might have a lot of data, but you are generally only operating on small pieces of it incrementally at any given time. I can give you two real-world examples where you could do some powerful stuff with a platform like XAI. I had one client that makes huge pumps used in nuclear reactors. They capture real-time streamed data from these pumps over wireless mesh networks and funnel all of it to some monitoring servers, where they then do manual analysis. What if you were streaming that data into a predictive analytics application that could use fuzzy logic to make preventative-maintenance recommendations before you had failures or degradation?

Idea for some extra reading:

If you're interested in some light reading on the tech concepts behind the Sapience design, these papers are good reads, this one in particular: http://users.monash.edu.au/~asadk/paper/full_text/A%20Multi%20Feature%20Pattern%20Recognition%20for%20P2P-based%20System%20Using%20In-network%20Associative%20Memory%20-%2085th%20Solomonoff%20Memorial%20Melbourne%20Australia%2030Nov%2002Dec%202011.pdf

http://www-ai.cs.uni-dortmund.de/LEHRE/SEMINARE/SS09/AKTARBEITENDESDM/LITERATUR/sam08_decisiontree_bhaduriKargupta.pdf


One of the Decentralized AI endgames:

I could imagine a year or two from now XAI becoming essentially like a decentralized Azure/AWS.


12  Alternate cryptocurrencies / Announcements (Altcoins) / Re: $XAI Sapience AIFX - Decentralized AI over coin network | Now on Bittrex, EmpoEx on: January 28, 2015, 04:27:08 PM
Hello Sapience Community

My aim is to present various XAI information right here on this forum, and perhaps on a blog at a later date, for all things related to Sapience... I'll be using dfx.io / Twitter / Slack / the wiki / media articles etc. as the sources.

There is a lot of information out there, but people new to this thread might miss 95% of it... not only that, it'll be good to have points of reference as the technology matures.

Before starting, I'd like to confirm with Joe (the developer in charge of this project) that he's OK with this.
