Author Topic: [DVC]DevCoin - Official Thread - Moderated  (Read 1058866 times)
markm (Legendary)
December 23, 2013, 01:26:42 AM (last edit: December 23, 2013, 02:46:37 AM by markm)
#2861

Quote
It depends if we are talking about music scores or music tracks.  For tracks it will take some time to listen, but that's potentially do-able.  For scores, as you say, it's impossible to judge the quality without listening to it being performed/recorded.  I'm not sure how scores can be accommodated.

Scores can be considered writing, in a hieroglyphic language whose characters include things like quavers and semiquavers (demiquavers?) and treble-clefs and such.

It is maybe just another written language.

As for what it sounds like, you should be able to simply press play, or modify which instruments to use and press play, or select a start bar or note and an end bar or note and press play, etc. And, like a wiki, you should even be able to edit it, so that over time the community of authors (uh, I mean composers) can settle on which edit to leave as the main one that visitors see/hear when they visit the page, since all previous states of the composition will also be available to them.

-MarkM-

Browser-launched Crossfire client now online (select CrossCiv server for Galactic  Milieu)
Free website hosting with PHP, MySQL etc: http://hosting.knotwork.com/
markm (Legendary)
December 23, 2013, 01:33:09 AM
#2862

Quote from: markm
For writing, think about fonts.

We do not care what font is used at display time to display an article; we are free to use any font we choose to use.

Quote from: georgem
Oh I absolutely care. There are classical pieces that sound divine when played with a harpsichord, but sound terribly inappropriate when played with a piano.  Smiley

Quote from: markm
To get a free open source Mona Lisa we would need to discover what brushes and paints were used, how, and in what sequence, to produce that painting.

We would then be free to see what the Mona Lisa would have looked like if it were performed using different brushes, different paints, different sequences.

Quote from: georgem
You assume that every creation process is quantizable into things like strokes, movements, pressure points, whatever... this sure is true for code or text.

But much like some musical instruments can't be controlled and reproduced by MIDI, many artforms have myriads of such minuscule subprocesses (which the artist is often not even aware of) that you would need hypothetical sci-fi devices to be able to "catch" what happens during the creation process. (Earlier I heard you mention Star Trek replicators replicating a violin. I don't think wishful thinking about future developments will help us make good decisions about devcoin's present.)


I saw a site just recently where a youth orchestra (landphilharmonic, I think) uses instruments built from scrap found in landfills.

Duplicating all those instruments would indeed be hard. But you seem to jump from that to it being impossible or improbable to 3-D print a violin or to code a violin-sound synthesiser. To me that landphilharmonic showed much the opposite of it being hard to emulate instruments; on the contrary, it seemed to indicate that you don't even need a 3-D printer: perfectly useable instruments can be created even out of crap found in landfills, no need for special and possibly expensive 3-D printer ink!

But nonetheless 3D printer code for creating all standard and umpteen non-standard instruments is something we should try to have.

And robotic arms for bending metal and working wood etc should be able eventually to use landfill materials too, they just would need a feedback process of some kind letting them try the tone, adjust the object, try the tone etc, "tuning" it until it sounds as good or almost as good as the ones the landphilharmonic uses.

Plans and instructions and guides for humans on how to find suitable things in landfills, and how best to adapt them for musical use, would also be good to have.

-MarkM-

markm (Legendary)
December 23, 2013, 01:38:00 AM (last edit: December 23, 2013, 02:47:18 AM by markm)
#2863

Quote from: markm
For writing, think about fonts.

We do not care what font is used at display time to display an article; we are free to use any font we choose to use.

Quote from: georgem
Oh I absolutely care. There are classical pieces that sound divine when played with a harpsichord, but sound terribly inappropriate when played with a piano.  Smiley

So include with the score a hint saying "many people find it sounds best when played on a harpsichord; in particular, using a piano is deprecated by some (reference provided, of course)."  Smiley

-MarkM-

georgem (Legendary)
December 23, 2013, 01:39:39 AM
#2864


Quote from: markm
Supposedly a lot of "real musicians" could "hear" a piece by reading the score.

So it might be a more effective use of people's time to have such musicians check the scores first, before even bothering to impose some particular performance of a score, using some particular voices and/or instruments and/or sound-effects, upon the ears of people who cannot random-access the thing but must instead proceed serially through it, and who maybe are not even able to listen to it in fast-forward to get a quick grok of it before delving down into nanosecond-by-nanosecond or second-by-second or minute-by-minute laborious executions of the score.

If the piece is good, but could be played in a way that would not sound so good, maybe the reviewers could also provide helpful hints such as approximately what range of instruments it should not sound too awful on, what kinds of speeds one could execute it at without losing its "artistic flavour" or "emotional appeal" and so on.

Like for example: "assuming you'd normally play it on a 33-RPM turntable, this piece would make reasonable elevator music, but at 45 RPM it is much more stimulating, possibly not useful as background music, and at 78 RPM it would probably be more useful for cartoon soundtracks than as romantic background music for a candlelight dinner. By the way, if you turn up the drum track and use this type of drum, you might find it affects more people in thus-and-such a way, whereas if you substitute piccolos for the oboes you might find it tends rather to suit X type of game-scenario" and so on.

Or "best suited for playing using husky female vocalist-instrument, if you go with a bass male voice you might also want to adjust this and that instrument in that and that way".

Or "recommended to be played in a minor key, however it also sounds very nice when played in the key of E flat" etc.

-MarkM-


Listen markm, I love the discussion we are having here. And I appreciate all the thought experiments you provide.

If this was a forum about artificial intelligence, I would be delighted with the discussion we are having.

Some of the examples you give, like changing the speed of music playback, have been possible for 50 years now and are not really new.

On the other hand, you propose non-existent, very advanced voice synthesizers and talk about them as if they already existed, which does not help our situation in the here and now.

As I said before, I agree that for musical content it could be considered open source if the musician who created the music simply talks openly about the process of his music creation.
How did he do it, what tools did he use, in what way etc...

But that should be it. Let people first and foremost provide music for free (and get compensated in DVC), so other people can use that music for free in their projects.

georgem (Legendary)
December 23, 2013, 01:40:19 AM
#2865

Quote from: markm
For writing, think about fonts.

We do not care what font is used at display time to display an article; we are free to use any font we choose to use.

Quote from: georgem
Oh I absolutely care. There are classical pieces that sound divine when played with a harpsichord, but sound terribly inappropriate when played with a piano.  Smiley

Quote from: markm
So include with the score a hint saying "many people find it sounds best when played on a harpsichord; in particular, using a piano is deprecated by some (reference provided, of course)."  Smiley

-MarkM-


Excellent. That's exactly what I would do. So we agree on this.

Go on...

markm (Legendary)
December 23, 2013, 01:46:06 AM
#2866


Quote
Please watch Star Trek...

Quote from: georgem
I watch Star Trek all the time.

I like sci-fi, but I don't let it influence my judgment when it comes to science, especially physics.

 Grin

On the television show they use human actors to play the parts of holograms; that is merely another case of being able to switch the instruments. We should equally well be able to have a different bunch of actors play those same roles from that same script; even aliens ought to be able to enact it.

Sure right now we have to laboriously have humans enact plays and scripts and screenplays and such, but that is merely an implementation-detail.

Right now we use Battle for Wesnoth to author "holonovels" and "holodramas" and "holodocumentaries" because we have no holodecks yet. We are stuck looking at two-dimensional representations of the action and choices and characters. Those Battle for Wesnoth scenarios though can still be the same scenarios come the day we have three-dimensional rendering options to allow them to be enacted/played/executed in 3D, and eventually in all-around-you 3D whether via goggles or full screen all around you.

The important thing is we have the actual code of the scenario, not just a movie showing what one player saw on their screen while they played the scenario.

So different players can play it differently, and different input-output devices can have it look different with players able to make their choices via different input devices.

-MarkM-

georgem (Legendary)
December 23, 2013, 02:01:10 AM
#2867

Quote from: markm
I saw a site just recently where a youth orchestra (landphilharmonic, I think) uses instruments built from scrap found in landfills.

Duplicating all those instruments would indeed be hard. But you seem to jump from that to it being impossible or improbable to 3-D print a violin or to code a violin-sound synthesiser. To me that landphilharmonic showed much the opposite of it being hard to emulate instruments; on the contrary, it seemed to indicate that you don't even need a 3-D printer: perfectly useable instruments can be created even out of crap found in landfills, no need for special and possibly expensive 3-D printer ink!

But nonetheless 3D printer code for creating all standard and umpteen non-standard instruments is something we should try to have.

And robotic arms for bending metal and working wood etc should be able eventually to use landfill materials too, they just would need a feedback process of some kind letting them try the tone, adjust the object, try the tone etc, "tuning" it until it sounds as good or almost as good as the ones the landphilharmonic uses.

Plans and instructions and guides for humans on how to find suitable things in landfills, and how best to adapt them for musical use, would also be good to have.

-MarkM-


Stuff found in a landfill should be industrially cleaned and processed before exposing young children to the toxic waste (heavy metals etc.) that is part of pretty much every piece of metal or electronic waste that ends up in a landfill.

The situation you describe reflects the plight of poor kids in a poor country.
It's not even their waste. We ourselves are the real creators of those third-world landfills. It's our stuff we threw away.

I am sorry, I don't understand the connection you are trying to make between this example and 3-D printers.

I admire poor people who make the best out of even the shittiest situation. We can learn something from them.

markm (Legendary)
December 23, 2013, 02:45:37 AM (last edit: December 23, 2013, 03:05:02 AM by markm)
#2868


Quote
The equivalent of what you're describing for writing would be an AI that can output readable and uniquely stylistic prose, but that's not what Devtome asks for.

It relies on human generated content and the ability to share and remix that content. How are people going to crawl into my closed source brain to see how I think of topics, conceptualize storylines, and string sentences together? Would there be a category that said, "I want a low-sci-fi voice tagged first-person and dark humor vs an academic voice tagged archaic English lexicon"? (Definitely an interesting concept that I wouldn't have thought of without this discussion.) It seems more practical to put the writing up, and also explain the writing process if there's enough interest for it.


Quote from: georgem
Perfect example. Thank you very much!

But hey, after this discussion I would conclude that this is exactly what devtome secretly wants to achieve: to substitute the writer with an algorithm.


Quote
I think the CC BY-SA license and opensource are highly compatible concepts, but they're not strictly the same thing. There might be a demand for open-source voice synthesizers and that is a very different project than human generated content that is not locked away by copyright. Both are valid, and the degree of their implementation will depend on the demand.

Quote from: georgem
I agree.

Hey, I like artificial intelligence. I would love nothing more than to one day have conversations with an android like Data.

Devcoin should absolutely have a big section about artificial intelligence. But it shouldn't interfere with or impose on the human intelligence of any participant that wants to contribute.



In a way, kind of.

First off, an illustrator of some kind maybe, that can, given a novel, try to illustrate it.

That way eventually maybe instead of spending millions of devcoins hiring human actors to act out a plot, each person will be free to let their own computer illustrate/animate it for them using their own preferences as to what goblins look like, how a dark and stormy night looks, whether a dark and stormy night seems to them to be better illustrated with a musical score or just storm sound-effects or both, and so on and so on.

Right now when we get an animated illustration of a famous novel or book (e.g. the Bible, or a Dickens novel, or Lord of the Rings etc) we get just a particular artist's or director's or film production crew's interpretation of the novel or book as cast in moving images or enacted as one particular play or screenplay more or less "true to the original".

There is massive need for the ability to automatically illustrate things; nowadays millions of devcoins worth of wealth goes into making just one visual representation of what happens to happen in a game someone plays, and this acts as a massive "moat" or barrier standing in the way of game development.

The plot and mechanics of a play or game or event or performance are in many cases the important part. People still read novels even though movies have been made of them. Partly this might even be due to the failure of many movie versions of a novel to faithfully represent the actual novel. They tend not to actually enact the novel graphically but in fact to chop up the plot, change it around even, alter the gender of some characters maybe, all kinds of changes. They seldom just directly display what the events described in the novel, in the order they are described, might actually look like.

There are lots of games out there that only use text to describe events and characters and settings and objects, and part of the reason is that text is the most basic and inexpensive way of representing the situations and events and settings. To create a graphical client that would illustrate such games would be hard, but it would save billions in cost compared to hiring a movie director and crew of actors to enact each and every possible permutation of states such a game could be in.

It is also hard to go in reverse: to have actual 3D models of which you only get to see a 2D view, and from what you see on the 2D screen figure out what objects are supposedly there, how much damage which weapon did to what and stuff like that.

It is two very different approaches, in one approach you have a state of affairs and it can be depicted in various ways, in text or with various artists' graphical impressions or with various attempts at 3D clients that attempt to put together some kind of illustration that accurately and concisely and conveniently represents to players what the actual state of affairs happens to be. In another approach you just get to see images of what some artist thought such a state of affairs might look like, which can make it very hard to actually compute exactly what state of affairs it is that the artist is trying to convey.

Ultimately yes it would be nice to have a narrator program that can look at the actions of 3D models and deduce what they are doing, what is happening, and what state of affairs they are depicting, and thus be able to describe in words what is visible and what it means in terms of a state of affairs.

But when we post to the English-language devtome typically most of the words we use are available in dictionaries, and those that are not are often proper nouns; the point is all those words are free open source words, not copyrighted photographs of words such that other people cannot use the same words in their compositions; and furthermore the words are font-independent.  We don't have to buy a library of letter-sequences or a patented word-sequencer to use them.

I agree that right now it is hard to find a free open source model of each and every object depicted in arbitrary photographs or a free open source model of each and every instrument that a musical score calls for.

But we should bear that in mind at all times, so as to try to avoid using photographs featuring objects we lack free open source models of for example, instead trying to first get free open source photos (eventually actual models) of each of the objects that are included in the photo so that eventually we can compose the photo.

Devcoin is supposed to be about development, about developing things, free open source things.

So it should focus more on how to develop music or images than on merely trying to fill storage space with some tiny sample of all the possible images and music that can be constructed given the components from which music and images are developed.

Yes, initially we need to "cheat", for example by having 2D images of goblins, orcs, motorcars, ships, shells, sealing-wax or any other objects that the composer or designer of a scenario or situation or plotline or holonovel might want or need to incorporate into their creation. But we should try to keep in mind at all times that that is a cheat: ultimately we want 3D (or more: including dimensions of range of actions and reactions would be nice too, for example) models of everything, so that we can construct new 2D images on the fly depicting things from any angle of view, and de-construct 2D images into what they are an image of and from what angle. Instead of trillions upon trillions of 2D images covering every possible angle and situation, we can compress it down to "it is these things situated thus and so, as seen from this angle under this type of lighting".

There is still massive scope for artists, and a lot of their work can be made much easier and more efficient. Instead of having to spend all day drawing or painting one frame at a time of a cartoon, they will be able to simply describe what it is that the cartoon is to depict and have 2D-view frames of those things and/or characters performing those activities generated at thus-and-such a frame-rate.

I do understand the concerns about artistic creativity, but please try to also understand that a lot of artists do a lot of drudge-work / gruntwork that seems to them horribly un-creative: full-time jobs creating "creative" assets (visuals etc), work that does not seem "creative" at all to them. Sometimes they even complain that such work dulls their creativity. (Citation needed?)

Often some lead artist or director or game-designer for example dictates exactly how everything is to look, the lead artist even sketches, maybe even fully fleshes out, one or more samples so the drudge-work guys see what style/mood/feel/theme they are to imitate, and the bulk of the "artists" then get to spend day after day churning out all the different view angles of the objects, all the different lighting conditions the characters might be seen in and on and on like that, total drudgery.

Just recently I saw a tool to help artists with that drudge-work: it let you automatically generate pretty good "under different lighting conditions" tiles for a 2D platform-type game from just a few renders, instead of having to manually render all the combinations / permutations. It was amazing: give it a flat 2D image and a few other 2D things and presto, it generates for you a whole range of "it looks textured aka not 2D" versions for different lighting-situations. Amazing.

Too much of what artists do (in 9-5 jobs, for example) is very far from creative in their own eyes, and having artists do it is very expensive. So if we can make a tool that will illustrate a novel or plot or in-game situation without having to force artists to spend endless hours doing drudge-work, that would be awesome.

It would still leave tons of room for creative art though. Just because your "make a movie of any novel any time you like" software comes with a bunch of off-the-shelf models of objects-found-in-novels with which to illustrate novels in no way means that an artist who makes a different set of objects the same software can use will not be able to find buyers; quite likely many people will be willing to buy object-sets that they find more pleasing to their eyes than any one set of objects already out there.

Look at Second Life: there you can edit your avatar, but people still hire artists to manually and painstakingly make a whole new different avatar or skin depicting the player.

So I think this kind of automation might even increase the market for custom hand-made artwork, since once anyone can have a model representing them or their house or whatever just by telling the computer various instructions like "give me bigger ears... darker skin... quiver over my left shoulder... gold ring on my left ring-finger..." etc, there will probably be people who will still want hand-crafted ones if even just to be able to say "ha ha my avatar is better than yours because good luck describing mine and having the default avatar-building software duplicate it without outright copying the hand-crafted skin that I am wearing".

But y'see for free open source we wouldn't want them to be wearing a hand-crafted skin that isn't free open source, because we want to be able to depict their character freely on other servers, take a copy home and modify it as we wish and so on and so on.

The big thing I suspect is the thinking in terms of composing from components. The actual skin and the actual frame over which to put the skin is better than just a bunch of 2D images of the avatar as seen from various angles.

So we should try to have models of all the things shown in a photograph in preference to the photograph itself, the models and skins for the creatures instead of just single views of creatures as seen from various angles and so on.

-MarkM-

cyke64 (Newbie)
December 23, 2013, 03:42:29 AM
#2869

Testing 0.8.5 devcoin windows client (https://github.com/sidhujag/devcoin/tree/master/dist/Windows32)

Today Sidhujag has updated the devcoin windows client from 1.0.0 to 1.0.1 (Devcoin-qt_V1.0.1.zip).
He has added dvcstable06/dvcstable07 to dns seed nodes.
Strangely, you must choose the RAW button if you want to download the zip file  Embarrassed
Extract the zip file to any folder and run the executable.
After launch, go to Help, then the debug window, to watch the block chain count  Smiley
Everything is perfectly running on Windows 8.
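
If anyone wants to script the download instead of hunting for the RAW button, here is a minimal Python sketch; the raw-file URL pattern and the target folder are assumptions, not anything official from the repo:

Code:
import io
import urllib.request
import zipfile

# GitHub's RAW button simply points at the raw-file URL for the blob;
# this is the assumed pattern for the 1.0.1 Windows zip.
RAW_URL = ("https://github.com/sidhujag/devcoin/raw/master/"
           "dist/Windows32/Devcoin-qt_V1.0.1.zip")

with urllib.request.urlopen(RAW_URL) as response:
    archive = zipfile.ZipFile(io.BytesIO(response.read()))

# Extract to any folder you like, then run devcoin-qt.exe from there.
archive.extractall("devcoin-qt-1.0.1")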
sidhujag (Legendary)
December 23, 2013, 04:11:19 AM
#2870

Quote
Why not create more bounties? The new wallet needs testing. Any immediate work? The new PR work I'm doing is going to be good shit; I suggest it be worth more than 12 shares, it's going to a wider audience and I will adhere to higher standards (up to quality admins to judge before release).

Quote
I would like to propose a bounty for qt images and new icons.  8 shares for the images and icons used in the qt, 4 shares for the second best set of icons.

Any objections, or should anything be changed?

I went ahead and did this... uploading and updating the update thread soon. v1.0.2 will have the new images/icons... testnet and normal included, as well as images to be used in the installer (nsis banner and sidebar image).

Also, the installer: there is built-in support in the source code repo for an NSIS installer, and I think it may be tied to the build/release system, which would automate the installer with correct version information. This is the preferred way to create an installer, rather than doing it manually. I propose we use this installer. I will try to make an installer with the new client to show you guys.
ranlo (Legendary)
December 23, 2013, 04:47:06 AM
#2871

Quote from: sidhujag
[...] I went ahead and did this... uploading and updating the update thread soon. v1.0.2 will have the new images/icons... testnet and normal included, as well as images to be used in the installer (nsis banner and sidebar image).

Also, the installer: there is built-in support in the source code repo for an NSIS installer, and I think it may be tied to the build/release system, which would automate the installer with correct version information. This is the preferred way to create an installer, rather than doing it manually. I propose we use this installer. I will try to make an installer with the new client to show you guys.

I just ran the new 0.8.5 version and it's awesome! I love this so much more than the other one; and it takes about a second from running it before it's updating the blockchain (the old one takes 3-5 minutes to start). Huge +1 from me!

https://nanogames.io/i-bctalk-n/
Message for info on how to get kickbacks on sites like Nano (above) and CryptoPlay!
markm (Legendary)
December 23, 2013, 04:51:44 AM
#2872

Quote
I'm kind of confused by why this project would exclude a musical equivalent of Devtome. The equivalent of what you're describing for writing would be an AI that can output readable and uniquely stylistic prose, but that's not what Devtome asks for. It relies on human generated content and the ability to share and remix that content. How are people going to crawl into my closed source brain to see how I think of topics, conceptualize storylines, and string sentences together? Would there be a category that said, "I want a low-sci-fi voice tagged first-person and dark humor vs an academic voice tagged archaic English lexicon"? (Definitely an interesting concept that I wouldn't have thought of without this discussion.) It seems more practical to put the writing up, and also explain the writing process if there's enough interest for it.

Well at the very least, remember that in a wiki anyone can edit anything.

Devtome articles can be spelling-corrected, grammar-corrected and so on by anyone.

So at the very least the same should be the case with a devtome-equivalent for music or images or movies or 3D models or whatever.

So for example if someone posts an item of pixel-art purporting to be an image of a certain object as seen under a certain light, and they maybe didn't get quite the right hue or shade or tint or whatever on a certain pixel, someone else, maybe someone who has the actual model and the actual light-source the image supposedly depicts, could correct that pixel.

The scope for auto-generated "spins" and "spam" and "drivel" seems to me potentially massively higher in imagery and soundtracks because it is so very easy to generate trillions upon trillions of images, as compared to generating trillions upon trillions of text articles that are grammatically correct and actually seem to have something to convey.

(For sound, for example, one could set oodles of noisemaking models moving around making noises, maybe in reaction to each other, or you could run Battle for Wesnoth with sound and record the sounds of a massive battle, and by varying which units you deploy you'd get different soundtracks, so you could make a track of elves versus goblins, another of goblins versus loyalists, and so on and so on and so on.)

Though admittedly maybe I just have not been following the development of constructive grammars, article spinners and suchlike spammer-tools closely enough lately. Is it still the case that when you generate trillions of articles using a constructive grammar the resulting articles seem to somehow lack internal sense and consistency and such?

For imagery one could fairly easily zoom cameras around OpenSimulator environments, having scripted objects walking around, random placement of trees and shrubs and buildings and so on and so on and generate insanely huge numbers of images, and each would have an internal sense and consistency because each is simply one possible angle of view of one possible configuration of three dimensional models.

Quote
I think the CC BY-SA license and opensource are highly compatible concepts, but they're not strictly the same thing. There might be a demand for open-source voice synthesizers and that is a very different project than human generated content that is not locked away by copyright. Both are valid, and the degree of their implementation will depend on the demand.

Well we already have, in the software development side of things, a distinction between "any old crap you choose to come up with" and "stuff we actually need".

So maybe we could do the same with other media?

Actually it is already maybe not only in programming, but in "being a developer of free open source stuff" in general.

In general you have to be a person who works at least ten hours per week on free open source stuff in order to qualify as a developer to get onto the receivers list.

(That is, in order to get one "share".)

I am not at all convinced that it takes forty hours to write 1000 words for Devtome, which is why Devtome author pay seems out of scale with everything else.

But, also in general, if what it is that you work on in the way of free open source stuff happens to be something we really need, such as bitcoin, or Open Transactions, then you only have to be a person who spends at least ten hours per month working on such stuff.

So I would imagine that at a bare minimum random images or sounds or music that someone feels like making should pay no more than 1/4 as much as images and music that are specifically required.

For example, if it is decided that the devtome site or the devcoin site or whatever needs a soundtrack, maybe because websites without sound earn less money, attract fewer visitors and so on, then presumably making such soundtracks ought to pay at least 4 times as much as just submitting random tracks just to get your pay per byte or pay per run-length minute or whatever a devtome-like site for music would use as a metric in calculating pay.

If it does become necessary to have a soundtrack for Devtome, then maybe it would turn out to make sense to have a distinct separate soundtrack for each article, based on the contents of the article and maybe also carrying on the general musical theme that relates all the tracks of all the articles together so on hearing one you can guess it is probably the soundtrack of a Devtome article and maybe - maybe even "hopefully" - also what category of article it is the soundtrack for...

Battle for Wesnoth needs soundtracks for scenarios, and maybe also grouped soundtracks, so that a campaign can carry a theme throughout a whole bunch of scenarios, with the track reflecting the mood of the individual scenario as well as the overall theme all the tracks of all the scenarios in the campaign have in common that relates them all together. If Battle for Wesnoth becomes a mission-critical component of the overall devcoin vision / roadmap, then presumably making soundtracks for those campaigns and scenarios that are needed for Devcoin's purposes ought, again, to pay four times as much as just random stuff that was not designed specifically to fill a particular need that the Devcoin project has.

I do think that Devtome author pay is probably way out of scale with everything else, and I still think that should be corrected.

In fact it seems to me that ideally we should at some point no longer need to pay authors by the word, because, hopefully, we will eventually be able to do authors the same way we do any other developers of free open source software, which is to say, if we find a good author who habitually as a lifestyle spends ten hours per week creating free open source stuff they should be able to get onto the receivers list as a developer of free open source stuff.

Notice that they get the same one share regardless of whether they only spend the absolute minimum - ten hours per week - working on such stuff or they do such stuff 40 hours a week or 60 hours a week or 80 hours a week or whatever.

The idea was we are looking for those people who already naturally as a lifestyle contribute their time freely to free open source development.

We seem to have gotten sidetracked from that, with Devtome suddenly we started trying to bribe people to develop free open source writings or to release their existing writings as free open source, and in fact I do not even recall our having even tried to go out and find authors who already have been freely contributing at least ten hours of writing per week to free open source projects...

-MarkM-

emfox (Full Member)
December 23, 2013, 05:00:52 AM
#2873

Quote from: sidhujag
[...] Also, the installer: there is built-in support in the source code repo for an NSIS installer, and I think it may be tied to the build/release system, which would automate the installer with correct version information. This is the preferred way to create an installer, rather than doing it manually. I propose we use this installer. I will try to make an installer with the new client to show you guys.

I don't know the Windows build or release process, but on Linux, bitcoin 0.8.6 has integrated the automake build system, which is a great step up for the build system. If it's not too hard, could we just update to 0.8.6?

Earn Devcoins by Writing
BTC: 1Emfox1WswYcd2YucUskRzqfRWKkcm1Jut DVC: 1Emfox1WswYcd2YucUskRzqfRWKkcm1Jut
IXC: xnRKo3qSDdcPJ4pgTLER3orkquUVQXeLwf
georgem (Legendary)
December 23, 2013, 05:06:03 AM
#2874


Quote from: markm
In a way, kind of.

First off, an illustrator of some kind maybe, that can, given a novel, try to illustrate it.

[...]

So we should try to have models of all the things shown in a photograph in preference to the photograph itself, the models and skins for the creatures instead of just single views of creatures as seen from various angles and so on.

-MarkM-


I understand.

But if creating unlimited worlds/stories/forms is simply a case of total parameterization of available source material, we would already have such software available.

Face it, 99% of all permutations will be useless, and the big question is "how will a computer figure out what the good, nice-looking permutations are?"
For the computer every number looks the same.

Just because you think you have taste and can recognize a good set of parameters within one second... doesn't mean the computer can do the same in even extremely long timeframes.

Yes sir, please go on and program an algorithm that mimics the process of good taste or that can recognize beauty or ugliness. Because that's what your computer would have to be able to do too. Not just spit out permutations, but also filter out the crap.


I remember some years ago I did a simple yet effective program that was something like a face creator.
I drew 10 different noses, 10 different head forms, 10 different eyes, 10 different mouths, etc...

In the end I had about 20 different features, each with 10 possible states.

Then, by letting the program choose one state of each feature randomly, I could potentially create 10^20 different faces.
It was hilarious and funny, but after a while it became clear that although no two faces looked exactly the same... an uncanny similarity was embedded in all faces still.
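
A minimal Python sketch of that kind of feature-combination generator; the feature names and how the variants are stored here are illustrative stand-ins, not the original program:

Code:
import random

# Roughly 20 features, each with 10 variants (labels standing in for the drawn parts).
FEATURES = {
    "nose":  [f"nose_{i}" for i in range(10)],
    "head":  [f"head_{i}" for i in range(10)],
    "eyes":  [f"eyes_{i}" for i in range(10)],
    "mouth": [f"mouth_{i}" for i in range(10)],
    # ... and so on, up to about 20 features
}

def random_face():
    # One variant per feature: with 20 features of 10 variants each there are
    # 10**20 possible combinations, yet they all share the same family look.
    return {feature: random.choice(variants)
            for feature, variants in FEATURES.items()}

print(random_face())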

What I want to say with this example is that although permutations lead you to believe that we have astronomical amounts of possibilities... most of those permutations will be useless.

Much like the number of possible chess games is greater than the number of atoms in the universe, still only a very, very small subset of this astronomical number is what we would call "interesting chess matches".


It's in the eye of the beholder. And the beholder IS human and not a machine.
For your plan to work you would need to already have a complete simulation of a human.
Then you could really go ahead and work through your endless permutations with brute force,
and continuously check the reaction of the simulated human to every set.

...

PS: time to go to bed, talk to you tomorrow.



sidhujag (Legendary)
December 23, 2013, 05:08:37 AM
#2875

Quote from: emfox
I don't know the Windows build or release process, but on Linux, bitcoin 0.8.6 has integrated the automake build system, which is a great step up for the build system. If it's not too hard, could we just update to 0.8.6?

I want to fully test with 0.8.5 first to make sure we didn't introduce any malfunction in the way it works. The code is totally different and we have to be confident that we can roll forward from this point on. After testing is done I will work on 0.8.6 and look ahead to the upcoming 0.9. We still have to decide on the fee changes and/or implications for the inflation rate structure if we go there. This will cause a hard fork.
sidhujag (Legendary)
December 23, 2013, 05:11:04 AM
#2876

Please see the devcoin source code update thread: https://bitcointalk.org/index.php?topic=310280.0

I made version 1.0.2 with updated icons and graphics... looks better now.

Run with devcoin-qt.exe -testnet to see the testnet startup.
markm (Legendary)
December 23, 2013, 05:16:12 AM (last edit: December 23, 2013, 05:30:19 AM by markm)
#2877

On the topic of Battle for Wesnoth, all the campaigns mentioned in Devtome date back to a version of the Battle for Wesnoth software that is now a few versions out of date.

Accordingly, all those campaigns need running through the "lint for campaigns" syntax-checker, which automatically updates whatever it can; then everything that tool points out it cannot automatically correct will need to be corrected manually.

In addition, none or almost none of the scenarios in those campaigns have soundtracks. At best they might just run Wesnoth's default playlist, or might have chosen one track from Wesnoth's very limited repertoire of soundtracks, or might even just let the program pick randomly from its default playlist.

Some of those campaigns are behind the times not merely in the sense of being coded for out-of-date versions of Wesnoth but also in terms of not keeping abreast of what has actually been going on around them.

For example, so far none of those scenarios mentions Devtome; whether or not any of the characters in them have even heard of Devtome is not specified, nor is it specified which of those characters also writes Devtome articles in addition to creating holobarracks programs (such as those very Battle for Wesnoth campaigns themselves, all of which are attributed to at least one of the characters found in at least one of those campaigns), and so on.

Currently the only way players of those campaigns would be led to discover Devtome is by following the clues that lead to such things as the CrossCiv server (or ... oops, I was going to write MUDgaard but then it occurred to me that none of those campaigns mention MUDgaard either, they are so out of date! ...) and meeting therein some player who thinks to mention Devtome to them.

So it would be nice if someone brought the campaigns up to date with the latest version of Battle for Wesnoth, whereupon adding references to Devtome and MUDgaard might make more sense (since being use-able with the current version should result in more users than if users have to install an old version of Battle for Wesnoth in order to play those campaigns) than it would right now.

(For those who are not aware of the fact, maybe it is worth mentioning that this whole project (Devcoin, Devtome etc), like the GNU project that in the campaigns is referred to by terms along the lines of "Grand Nexus Uberplot", and Battle for Wesnoth itself, which is characterised as a form of holodeck-programmer training-tool for deployment on planets on which the deployment of actual holodecks is deprecated, is part of the game...)

-MarkM-

emfox (Full Member)
December 23, 2013, 05:25:14 AM
#2878

Quote from: sidhujag
[...] We still have to decide on the fee changes and/or implications for the inflation rate structure if we go there. This will cause a hard fork.

Inflation? That will be a big hard fork and will change many things... I don't know if we have discussed this before...

sidhujag (Legendary)
December 23, 2013, 05:46:11 AM
#2879

Quote from: emfox
Inflation? That will be a big hard fork and will change many things... I don't know if we have discussed this before...

I just brought it up because the fees are still set at 1000x bitcoin's fees, which were based on the 50-coin block reward, but that has changed now... so the fees may need to double to match bitcoin's. Also, when I first joined the project I thought the goal was to be 1000x more inflationary than bitcoin, but that relation breaks when the bitcoin reward halves, so I proposed we halve with it until, say, a minimum of something like 1000 coins per block when bitcoin is at 1... this will keep the ratio up till that point and then split off. I just threw it out there; my idea of a good idea may not be the right choice, and unthinkingbit makes the final decision. He knows I've been on about it for a while now.

So my question is: if we're discussing fees, do we discuss inflation too? Or meh.
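
A rough Python sketch of the proposed schedule; the 1000x starting ratio and the 1000-coin floor are taken from this post, the 210,000-block halving interval is Bitcoin's, and none of it is an agreed spec:

Code:
DVC_INITIAL_REWARD = 50_000    # assumed: 1000x Bitcoin's original 50-coin reward
HALVING_INTERVAL = 210_000     # Bitcoin halves every 210,000 blocks
DVC_FLOOR = 1_000              # the minimum suggested in this post

def proposed_dvc_reward(btc_height):
    # Halve in step with Bitcoin until the floor is reached, then stay flat.
    halvings = btc_height // HALVING_INTERVAL
    return max(DVC_INITIAL_REWARD / (2 ** halvings), DVC_FLOOR)

# After six halvings Bitcoin's reward drops below 1 (50 / 2**6 = 0.78125);
# around that point the raw Devcoin figure (781.25) is clamped to the 1000-coin floor.
for height in range(0, 7 * HALVING_INTERVAL, HALVING_INTERVAL):
    print(height, proposed_dvc_reward(height))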
markm (Legendary)
December 23, 2013, 05:59:51 AM
#2880

We need to keep on minting coins forever without halving the minting, because we need to keep sending 90% of the coins to the people/projects (addresses) that are listed in the receiver files.
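
In numbers, a tiny sketch of that standing arrangement; the 50,000-coins-per-block figure is assumed from memory rather than quoted from the generation code:

Code:
BLOCK_REWARD = 50_000                   # assumed Devcoin block reward, never halved

receiver_share = 0.9 * BLOCK_REWARD     # 45,000 DVC per block to receiver-file addresses
miner_share = 0.1 * BLOCK_REWARD        #  5,000 DVC per block to whoever mints the block

print(receiver_share, miner_share)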

-MarkM-
