Bitcoin Forum
Poll
Question: Viᖚes (social currency unit)?
like - 27 (27.6%)
might work - 10 (10.2%)
dislike - 17 (17.3%)
prefer tech name, e.g. factom, ion, ethereum, iota, epsilon - 15 (15.3%)
prefer explicit currency name, e.g. net⚷eys, neㄘcash, ᨇcash, mycash, bitoken, netoken, cyberbit, bitcash - 2 (2%)
problematic - 2 (2%)
offending / repulsive - 4 (4.1%)
project objectives unrealistic or incorrect - 10 (10.2%)
biased against lead dev or project ethos - 11 (11.2%)
Total Voters: 98

Author Topic: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin?  (Read 95215 times)
TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 257
April 11, 2016, 02:44:16 AM
 #961

You will find that if you look at applications written in C they are built up using layers of libraries. By the time you get to what might be called the business logic (although a "business" might not necessarily be involved -- this is a term of art) you are barely using C any more, just a bunch of library calls where almost every language ends up being almost identical.

I'm not a fan of C for larger team projects because the process of building up your own libraries on top of libraries is difficult to express in a consistent and reliable way. Not impossible, but difficult. It tends to work better for a project written and maintained by one person or perhaps a very small group.

Good point that due to the lack of a modern type system, C is really limited on the orthogonal coding and modules aspect. Intermixing/interoperating with Javascript via Emscripten makes this even more brittle.

I think, though I'm not sure, that most C coding these days (which still seems to be quite popular) is indeed system programming, firmware for devices, performance-critical callouts from other languages (Python for example), etc. But I have few if any sources for that, and it may be completely wrong.

Well that seems to be true, and my Github exemplifies me coding in C for those cases, but it would be better if I didn't have to add the thorn of C and FFI to interoperate C with my other code.

AlexGR
Legendary
Activity: 1708
Merit: 1049
April 11, 2016, 03:00:04 AM
 #962

Personally I'm absolutely amazed that such a weird language as C, which was intended for writing ...OPERATING SYSTEMS, is used to this day to write ...applications.

Are you a programmer AlexGR? I understood from the other thread discussion we had that you mostly aren't, but not sure.

It depends on the definition. I do not consider myself one but I've written some simple programs, mainly for my own use, in the past.

I still code occasionally some things I need. For example I started recently (an attempt) to implement some neural network training to find patterns in SHA256, to see if it might be feasible to create a counterbalanced approach to ASIC mining, by pooling cpu and gpu resources in a distributed-processing neural network with tens of thousands of nodes (the equivalent of a mining pool). Anyway while I think the idea has merit, I didn't get very far. Now I'm contemplating whether neural networks can find weaknesses in hashing functions like cryptonight, x11, etc etc and leverage this for shortcutting the hashing. In theory all hashing should be pretty (pseudo)random but since there is no such thing as random it's just a question of finding the pattern... and where "bad crypto" is involved, it could be far easier to do so.

Anyway these are a mess in terms of complexity (for my skills) and I'm in way over my head due to my multifaceted ignorance. My background wasn't that much to begin with: I started with ZX Spectrum BASIC back in the 80s, then did some BASIC and Pascal for the PC, then did some assembly... and then the Internet came and I kind of lost interest in "singular" programming. In the "offline" days it was easy to just sit in front of a screen, with not much distraction, and "burn" yourself to exhaustion, staring at the screen for 15 hours to write a program. In the post-internet era that much was impossible... ICQ, MSN, forums, all these gradually reduced my attention span and intention to focus.

I always hated c with a passion while pascal was much better aligned with how I was thinking. It helped enormously that TPascal had a great IDE for DOS, a nice F1-Help reference etc. Now I'm using free pascal for linux, which is kind of similar, but not for anything serious. As for c, I try to understand what programs do so that I can change a few lines to do my work in a different way.

In theory, both languages (or most languages?) do the same job... you call the library you want, use the part you want as the "command" and voila. Where they may differ is code execution speed and other more subtle things. Library support is obviously not the same, although I think fpc supports some c libs as well.

Still, despite my preference for the way pascal was designed as a more human-friendly language, it falls waaaaaaaaaay short of what I think a language should actually be.

Syntax should be very simple and understandable - kind of like pseudocode. Stuff like integers overflowing, variables not having the precision they need, programs crashing due to idiotic stuff like divisions by zero, having buffers overflow etc etc - things like that are ridiculous and shouldn't even exist.

By default a language should have maximum allowance for the sizes that a user could fit in vars and consts, but also let the user fine-tune them only if he likes to do so to increase speed (although a good compiler should be able to tell which sizes can be downsized safely and do so by itself anyway). Compilers should be infinitely better, taking PROPER advantage of our hardware. How the hell can we be over 15 years past SSE2 and still not have it properly used? You make a password cracker and you have to manually insert ...SSE intrinsics. I mean wtf? Is this for real? It's not an instruction set that was deployed last month - it's been there for ages. Same for SSE3/SSSE3/SSE4.1/4.2, which are like 8 years old or more and go unused. How a language can pretend to be "fast" when it throws away SIMD acceleration is beyond me. But that's not the fault of the language (=> the compiler is at fault). Taking advantage of GPU resources should also be a more automated process by now.
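For illustration, this is roughly what those hand-inserted intrinsics look like. A minimal sketch using the SSE intrinsics exposed through Rust's std::arch (assumptions: an x86_64 target and equal slice lengths that are a multiple of 4; add_f32_sse is a made-up name, while _mm_loadu_ps / _mm_add_ps / _mm_storeu_ps are the standard SSE intrinsics):

Code:
// Hand-vectorized elementwise add: 4 floats per iteration via SSE.
#[cfg(target_arch = "x86_64")]
fn add_f32_sse(a: &[f32], b: &[f32], out: &mut [f32]) {
    use std::arch::x86_64::{_mm_add_ps, _mm_loadu_ps, _mm_storeu_ps};
    assert!(a.len() == b.len() && a.len() == out.len() && a.len() % 4 == 0);
    for i in (0..a.len()).step_by(4) {
        // SAFETY: bounds asserted above; SSE is part of the x86_64 baseline.
        unsafe {
            let va = _mm_loadu_ps(a.as_ptr().add(i));               // load 4 floats from a
            let vb = _mm_loadu_ps(b.as_ptr().add(i));               // load 4 floats from b
            _mm_storeu_ps(out.as_mut_ptr().add(i), _mm_add_ps(va, vb)); // store a + b
        }
    }
}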

Quote
I think, though I'm not sure, that most C coding these days (which still seems to be quite popular) is indeed system programming, firmware for devices, performance-critical callouts from other languages (Python for example), etc. But I have few if any sources for that, and it may be completely wrong.

OSes are definitely C, but the majority of OS apps are also C, especially on Linux.
smooth
Legendary
Activity: 2968
Merit: 1198
April 11, 2016, 03:08:13 AM
 #963

Compilers (including C compilers) do autovectorize and use SSE and similar instructions. It's not always at the same level you could get by doing it explicitly, but it does happen.
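To make that concrete, a minimal sketch of the kind of loop LLVM-based compilers will usually auto-vectorize in an optimized build with no hand-written intrinsics (shown in Rust; add_f32 is a made-up name, and the equivalent C loop under gcc/clang -O3 behaves similarly):

Code:
// Elementwise add over slices. Built with optimizations on (e.g. --release),
// LLVM typically unrolls this and emits SSE/AVX instructions on x86_64,
// because the zipped iterators remove the per-element bounds checks that
// would otherwise block vectorization.
fn add_f32(a: &[f32], b: &[f32], out: &mut [f32]) {
    for ((x, y), o) in a.iter().zip(b).zip(out.iter_mut()) {
        *o = *x + *y;
    }
}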

As far as the rest, programming languages are a nasty thicket. Every decision seems like some sort of tradeoff with few pure wins. Make it better in some way and it gets worse in others.

Yes most OS apps on Linux are C but that is largely legacy. They were written 20 years ago, and rewriting doesn't have major advantages (if any).

I guess that's another place C gets its current high level of popularity: maintenance of legacy applications. There are a lot of them.

If you look around at software that is being developed today by Google, Amazon, etc., very little of it is written in C (other than maybe low-level hardware stuff, as I mentioned earlier).
TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 257
April 11, 2016, 03:39:15 AM
 #964

Stuff like integers overflowing, variables not having the precision they need, programs crashing due to idiotic stuff like divisions by zero, having buffers overflow etc etc - things like that are ridiculous and shouldn't even exist.

Impossible unless you want to forsake performance and degrees-of-freedom. There is a tension here: infinite capability of that sort implies 0 performance and 0 degrees-of-freedom.

Sorry, some of the details of programming that you wish would disappear can't.
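To illustrate that tension concretely, a minimal Rust sketch: each flavor of integer add sits at a different point on the safety/performance/freedom curve, and the programmer, not the language, picks which one to pay for (checked_add and wrapping_add are standard library methods; the u8 values are arbitrary):

Code:
fn main() {
    let a: u8 = 250;
    let b: u8 = 10;

    // Explicit check: costs a branch, returns None instead of overflowing.
    match a.checked_add(b) {
        Some(sum) => println!("sum = {}", sum),
        None => println!("overflow detected"),
    }

    // Explicit opt-out: compiles to a plain add and wraps around silently.
    println!("wrapped = {}", a.wrapping_add(b)); // prints 4

    // A plain `a + b` panics in debug builds and wraps in release builds.
}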

TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 257
April 11, 2016, 05:54:45 AM
 #965

As far as the rest, programming languages are a nasty thicket. Every decision seems like some sort of tradeoff with few pure wins. Make it better in some way and it gets worse in others.

Your Language Sucks Because...lol.

I'm formulating a theory that the worsening is due to ideological decisions that were not based on the pragmatic science of utility and practicality.

Examples:

1. Rust abandoning Typestate (as I warned in 2011) which was originally motivated by the desire to assert every invariant in the program. But of course infinite assertion is 0 performance and 0 degrees-of-freedom. Rather you choose a type system that meets the practical balance between utility and expression. And Typestate was not holistically unified with the type system of the language, thus it was doomed (as I had warned).

2. Scala was started as an ideological experiment (grown out of the Pizza project, which added generics to Java) to marry functional programming concepts and OOP. Problem is that subclassing is an anti-pattern because, for example, the Liskov Substitution Principle is easily violated by subclasses. Here is my Scala Google group post where I explained that Scala 3's DOT is trying to unify (the anti-pattern) subclassing and abstract types. Here is my post where I explained why I think Rust-like interfaces (Haskell Typeclasses) with first-class unions (which I had proposed for Scala) instead of subclass subsumption is the correct practical model of maximum utility and not an anti-pattern (see the sketch at the end of this post). Oh my, I was writing that when I was 4 days into my 10-day water-only fast that corresponded with my leap off the health cliff last year. Here is my post from April 2012 (right before my May hospitalization that started the acute phase of my illness) wherein I explained that "the bijective contract of subclassing is undecidable". Note Scala's complexity (generics are Turing-complete, thus generally undecidable, same as for C++) means the compiler is uber slow, making it entirely unsuitable for JIT compilation.

3. James Gosling's prioritization of simplification as the ideological justification for the train wreck that is Java's and JVM's lack of unsigned types (and for originally lacking generics and still no operator overloading).

4. Python's ideology (or ignorance) of removing the shackles of a unified type system, which means not only does it have corner-case gotchas, but it can never be a statically checked, compiled language and thus can't be JIT optimized.

5. C++'s ideology of marrying backward compatibility with C (which is not entirely achieved) with OOP (including the subclassing anti-pattern on steroids with multiple diamond inheritance) is a massive clusterfuck of fugly complexity and corner cases.

P.S. I need to revisit the ideology of category theory applied to programming to try to analyze its utility more practically.
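To make point 2 above concrete (as referenced there), a minimal Rust sketch of the typeclass-style interface approach: behavior is added to independent types with no base class and no subclass subsumption. The first-class-unions half of the proposal has no direct Rust equivalent, so only the interface half is shown; Circle, Square, Area and total_area are made-up names:

Code:
trait Area {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }
struct Square { side: f64 }

// Each type opts in to the interface separately; there is no class hierarchy.
impl Area for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}
impl Area for Square {
    fn area(&self) -> f64 { self.side * self.side }
}

// Generic over anything implementing Area, resolved at compile time
// (monomorphized), so no Liskov-style subclass substitution is involved.
fn total_area<T: Area>(shapes: &[T]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let squares = vec![Square { side: 1.0 }, Square { side: 2.0 }];
    println!("{}", total_area(&squares)); // 5
}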

AlexGR
Legendary
Activity: 1708
Merit: 1049
April 11, 2016, 06:24:33 AM
Last edit: April 11, 2016, 06:37:19 AM by AlexGR
 #966

Stuff like integers overflowing, variables not having the precision they need, programs crashing due to idiotic stuff like divisions by zero, having buffers overflow etc etc - things like that are ridiculous and shouldn't even exist.

Impossible unless you want to forsake performance and degrees-of-freedom. There is a tension here: infinite capability of that sort implies 0 performance and 0 degrees-of-freedom.

Sorry, some of the details of programming that you wish would disappear can't.

The time when programmers tried to save every possible byte to increase performance is long gone because the bytes and the clock cycles aren't that scarce anymore. So we are in the age of routinely forsaking performance just because we can, and we do so despite using - theoretically, very efficient languages. The bloat is even compensating for hardware increases - and even abuses the hardware more than previous generations of software.

You see some programs like, say, Firefox, Chrome, etc etc, that should supposedly be very efficient as they are written in C (or C++), right? Yet they are bloated like hell, consuming gigabytes of ram for the lolz. And while some features, like sandboxing processes so that the browser won't crash, do make sense in terms of ram waste, the rest of the functionality doesn't make any sense in terms of wasting resources. I remember in the win2k days, I could open over 40-50 netscape windows without any issue whatsoever.

Same bloat situation for my linux desktop (KDE Plasma)... It's a piece of bloated junk. KDE 4 was far better. Same for windows... Win7 wants 1gb ram just to run. 1gb? For what? The OS? Are they even thinking when they make the specs?

Same for our cryptocurrency wallets. Slow as hell and very resource-hungry.

In any case, if better languages don't exist, the code that should be written won't be written because there will be a lack of developers to do so. Applications that will never be written won't run that fast... they won't even exist!

Now that's a prime corporate problem. "We need able coders to develop X, Y, Z". Yeah well, there are only so many of them and the corporations won't be able to hire as many as they want. So...

In the 80's they thought if they train more coders they'll solve the problem. In the 90's they discovered that coders are born, not educated - which was very counterintuitive. How can you have a class of 100 people and 10 good coders and then have a class of 1000 people and 20 good coders instead of 100? Why aren't they increasing linearly but rather the number of good coders seem to be related to some kind of "talent" like ....music? Well, that's the million dollar question, isn't it?

I searched for that answer myself. I'm of the opinion that knowledge is teachable so it didn't make any sense. My mind couldn't grasp why I was rejecting C when I could write asm. Well, after some research I found the answer to the problem. The whole structure of C, which was somehow elevated as the most popular programming language for applications, originates from a certain mind-set that C creators had. There are minds who think like those who made it, and minds who don't - minds who reject this structure as bullshit.

The minds who are rejecting this may have issues with what they perceive as unnecessary complexity, ordering of things, counter-intuitive syntax, etc. In my case I could read a line of asm and know what it did (like moving a value to a register or calling an IRQ) and then read a line of C and have 3 question-marks on what that fuckin line did. I've also reached the conclusion that it is impossible to have an alignment with C-thinking (preferring it as a native environment) without having anomalies in the way one thinks in life, in general. It's similar to the autism drawback of some super-intelligent people, but in this case it is much milder and doesn't need to be at the level of autism. Accepting the rules of the language, as this is, without much internal mind-chatter objection of "what the fuck is this shit" and just getting on with it, is, in a way, the root cause why there are so few people using it at any competent level globally. If more people had rejected it then we'd probably have something far better by now, with more popular adoption.

There are so many languages springing up but they are all trying to be the next C instead of being unique. For sure, programmer's convenience in transitioning is good, but it should be abandoned in favor of much friendlier languages.

While all this is nice in theory the problem will eventually be solved by AI - not language authors. It will ask what we want in terms of execution and we'll get it automatically by the AI-programming bot who will then be sending the best possible code for execution.

AI-Generic interface: "What do you want today Alex?"
Alex: "I want to find the correlation data of soil composition, soil appearance, minerals and vegetation, by combining every known pattern recognition, in the data set I will be uploading to you in a while".
AI-Generic interface: "OOOOOOK, coming right up" (coding it from the specs given)
AI-Generic interface: "You are ready... do you want me to run the program with the data in the flash drive you just connected?"
Alex: "Sure".

This means that the only thing we need to program is a self-learning AI. Once this is done, it will be able to do everything that a coder can do, even better and faster. It will be able to do what an if-then-else idiotic compiler does today, but far better in terms of optimizations and hardware exploitation. Most of the optimizations that aren't done are skipped because the if-then-else compiler doesn't recognize the logic behind the program or whether the optimization would be safe. But if programmer and compiler are the same "being", then things can start ...flying in terms of efficiency.

This got futuristic very fast, but these aren't so futuristic anymore. They might have been 10 years ago but now it's getting increasingly closer.
TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 257
April 11, 2016, 12:45:33 PM
Last edit: April 12, 2016, 01:02:33 AM by TPTB_need_war
 #967

Stuff like integers overflowing, variables not having the precision they need, programs crashing due to idiotic stuff like divisions by zero, having buffers overflow etc etc - things like that are ridiculous and shouldn't even exist.

Impossible unless you want to forsake performance and degrees-of-freedom. There is a tension here: infinite capability of that sort implies 0 performance and 0 degrees-of-freedom.

Sorry, some of the details of programming that you wish would disappear can't.

The time when programmers tried to save every possible byte to increase performance is long gone because the bytes and the clock cycles aren't that scarce anymore.

That is the propaganda but it isn't true. There are many scenarios where optimization is still critical, e.g. block chain data structures, JIT compilation so that web pages (or in my goal Apps) can start immediately upon download of device-independent code, cryptography algorithms such as the C on my github.

So we are in the age of routinely forsaking performance just because we can, and we do so despite using - theoretically, very efficient languages.

We almost never should ignore optimization for mobile Apps because otherwise it consumes the battery faster.

You want to wish away the art of programming and have it replaced by an algorithm, but I am sorry to tell you that you and Ray Kurzweil have the wrong conceptualization of knowledge (<--- I wrote that "Information Is Alive!").


The bloat is even compensating for hardware increases - and even abuses the hardware more than previous generations of software.

There are still people in this world who can't afford 32 GB of RAM and if we forsake optimization, then a power user such as myself with 200 tabs open on my browser will need 256 GB of RAM. A typical mobile phone only has 1 GB.

I have to reboot my Linux box daily because I only have 16 GB of RAM and the garbage collector of the browser totally freezes my machine.


You see some programs like, say, Firefox, Chrome, etc etc, that should supposedly be very efficient as they are written in C (or C++), right? Yet they are bloated like hell, consuming gigabytes of ram for the lolz.

I am thinking you are just spouting off without actually knowing the technical facts. The reason is ostensibly that, for example, Javascript forces the use of garbage collection, which is an example of an algorithm which is not as accurate and performant as expert human-designed memory deallocation. And memory allocation is a hard problem, which is ostensibly why Mozilla funded the creation of Rust, with its new statically compiled memory deallocation model to aid the human with compiler-enforced rules.
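For illustration, a minimal Rust sketch of that statically compiled deallocation model (the names and buffer size are arbitrary): ownership fixes at compile time the single point where each allocation is freed, so no collector ever runs:

Code:
fn make_buffer(n: usize) -> Vec<u8> {
    vec![0u8; n] // heap allocation owned by the returned Vec
}

fn main() {
    let buf = make_buffer(1024);  // ownership of the allocation moves to `buf`
    {
        let borrowed = &buf;      // a borrow: no copy, no refcount, checked at compile time
        println!("len = {}", borrowed.len());
    }                             // the borrow ends here
    println!("still usable: {}", buf.len());
    // `buf` goes out of scope at the end of main: the compiler inserts the
    // deallocation right here, with no garbage collector involved.
}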

And while some features, like sandboxing processes so that the browser won't crash, do make sense in terms of ram waste, the rest of the functionality doesn't make any sense in terms of wasting resources. I remember in the win2k days, I could open over 40-50 netscape windows without any issue whatsoever.

Same bloat situation for my linux desktop (KDE Plasma)... It's a piece of bloated junk. KDE 4 was far better. Same for windows... Win7 wants 1gb ram just to run. 1gb? For what? The OS? Are they even thinking when they make the specs?

Man haven't you noticed that the capabilities of webpages have increased. You had no DHTML then thus nearly no Javascript or Flash on the web page.

Above you argued that bloat doesn't matter and now you argue it does.   Roll Eyes


In the 80's they thought if they train more coders they'll solve the problem. In the 90's they discovered that coders are born, not educated - which was very counterintuitive. How can you have a class of 100 people and 10 good coders and then have a class of 1000 people and 20 good coders instead of 100?

Because the 100 were populated by more natural hackers (e.g. as a kid I took apart all my toys instead of playing with them) who discovered the obscure field because of their interest. The 1000 was probably advertised to people who shouldn't be pursuing programming, which ostensibly includes yourself.

Why aren't they increasing linearly but rather the number of good coders seem to be related to some kind of "talent" like ....music? Well, that's the million dollar question, isn't it?

I searched for that answer myself. I'm of the opinion that knowledge is teachable so it didn't make any sense.

Absolutely not. Knowledge is serendipitously and accretively formed.

If you have the natural inclination for some field, then you can absorb the existing art and add to it serendipitously and accretively. If you don't have the natural inclination, no amount of pounding books on your head will result in any traction.


My mind couldn't grasp why I was rejecting C when I could write asm. Well, after some research I found the answer to the problem. The whole structure of C, which was somehow elevated as the most popular programming language for applications, originates from a certain mind-set that C creators had. There are minds who think like those who made it, and minds who don't - minds who reject this structure as bullshit.

Agreed you do not appear to have the natural inclination for C. But C is not bullshit. It was a very elegant portable abstraction of assembly that radically increased productivity over assembly.

The minds who are rejecting this may have issues with what they perceive as unnecessary complexity, ordering of things, counter-intuitive syntax, etc. In my case I could read a line of asm and know what it did (like moving a value to a register or calling an IRQ) and then read a line of C and have 3 question-marks on what that fuckin line did.

What you are describing is the fact that C abstracts assembly language. You must lift your mind into a new machine model, which is the semantics of C. The semantics are no less rational than assembly.

You apparently don't have an abstract mind. This was also evident from our recent debate about ethics, wherein you didn't conceive of objective ethics.

You probably are not very good at abstract math either.

Your talent probably lies elsewhere. I don't know why you try to force on yourself something that your mind is not structured to do well.


I've also reached the conclusion that it is impossible to have an alignment with C-thinking (preferring it as a native environment) without having anomalies in the way one thinks in life, in general. It's similar to the autism drawback of some super-intelligent people, but in this case it is much milder and doesn't need to be at the level of autism. Accepting the rules of the language, as this is, without much internal mind-chatter objection of "what the fuck is this shit" and just getting on with it, is, in a way, the root cause why there are so few people using it at any competent level globally. If more people had rejected it then we'd probably have something far better by now, with more popular adoption.

I strongly suggest you stop blaming your handicaps and talents on others:

Ego is for little people

My claim is that egotism is a disease of the incapable, and vanishes or nearly vanishes among the super-capable.

I’m the crippled kid who became a black-belt martial artist and teacher of martial artists. I’ve made the New York Times bestseller list as a writer. You can hardly use a browser, a cellphone, or a game console without relying on my code. I’ve been a session musician on two records. I’ve blown up the software industry once, reinvented the hacker culture twice, and am without doubt one of the dozen most famous geeks alive. Investment bankers pay me $300 an hour to yak at them because I have a track record as a shrewd business analyst. I don’t even have a BS, yet there’s been an entire academic cottage industry devoted to writing exegeses of my work. I could do nothing but speaking tours for the rest of my life and still be overbooked. Earnest people have struggled their whole lives to change the world less than I routinely do when I’m not even really trying. Here’s the point: In what way would it make sense for me to be in ego or status competition with anybody?

I think there are a couple of different reasons people tend to falsely attribute pathological, oversensitive egos to A-listers. Each reason is in its own way worth taking a look at.

The first and most obvious reason is projection. “Wow, if I were as talented as Terry Pratchett, I know I’d have a huge ego about it, so I guess he must.” Heh. Trust me on this; he doesn’t. This kind of thinking reveals a lot about somebody’s ego and insecurity, alright, but not Terry’s.

Finally, I think a lot of people need to believe that A-listers invariably have flaws in proportion to their capabilities in order not to feel dwarfed by them. Thus the widely cherished belief that geniuses are commonly mentally unstable; it’s not true (admissions to mental hospitals per thousand drop with increasing IQ and in professions that select for intelligence, with the lowest numbers among mathematicians and theoretical physicists) but if you don’t happen to be a genius yourself it’s very comforting. Similarly, a dullard who believes A-listers are all flaky temperamental egotists can console himself that, though he may not be smarter than them, he is better. And so it goes.

Ego is for little people. I wish I could finish by saying something anodyne about how we’re all little when you come down to it, but I’d be fibbing. Yeah, we’re all little compared to a supernova, but that’s beside the point. And yeah, the most capable people in the world are routinely humbled by what they don’t know and can’t do, but that is beside the point too. If you look at how humans relate to other humans – and in particular, how they manage self-image and “ego” and evaluate their status with respect to others…it really is different near the top end of the human capability range. Better. Calmer. Sorry, but it’s true.



There are so many languages springing up but they are all trying to be the next C instead of being unique. For sure, programmer's convenience in transitioning is good, but it should be abandoned in favor of much friendlier languages.

You are not an expert programmer, ostensibly never will be, and should not be commenting on the future of programming languages. Period. Sorry. Programming will never be reduced to an art form for people who hate abstractions, details, and complexity.

Designing an excellent programming language is one of the most expert of all challenges in computer science.


While all this is nice in theory the problem will eventually be solved by AI - not language authors. It will ask what we want in terms of execution and we'll get it automatically by the AI-programming bot who will then be sending the best possible code for execution.

Sorry never! You and Ray Kurzweil are wrong and always will be. Sorry, but frankly. I don't care if you disagree, because you don't have the capacity to understand.

AlexGR
Legendary
Activity: 1708
Merit: 1049
April 11, 2016, 04:16:44 PM
 #968

That is the propaganda but it isn't true. There are many scenarios where optimization is still critical, e.g. block chain data structures, JIT compilation so that web pages (or in my goal Apps) can start immediately upon download of device-independent code, cryptography algorithms such as the C on my github.

It's not propaganda. It's, let's say, the mainstream practice. Yes, there are things which are constantly getting optimized more and more, but every time I see serious optimizations needed, they all have to go down to asm. Why? Because c, as a source, is just a textfile. The actual power of the language is the compiler. And the compilers we have suck badly - even the corporate ones like Intel's icc - which is better in vectorizing and often a source for ripping off its generated code, but still.

Quote
We almost never should ignore optimization for mobile Apps because otherwise it consumes the battery faster.

True. But sometimes the problem is at the kernel level. If you do a make menuconfig for the kernel and you go through the options one by one, you realize a very problematic truth: one piece of software may be deciding when to preempt, another routine may be evaluating cache usage and cache flushing, another may be evaluating idleness, another may be checking for untrusted code execution, another may be checking for buffer overflows in order to reboot as a precaution, another is checking whether there is stuff to debug, another is logging everything, another is doing real-time performance analysis, etc etc etc. It's like for every process doing work there are 20 processes watching over the 1 that actually does something - and each one promising either "low overhead" or "performance improvement". All combined they create a bloated feel.

Quote
There are still people in this world who can't afford 32 GB of RAM and if we forsake optimization, then a power user such as myself with 200 tabs open on my browser will need 256 GB of RAM. A typical mobile phone only has 1 GB.

I have to reboot my Linux box daily because I only have 16 GB of RAM and the garbage collector of the browser totally freezes my machine.


I have 4 GB on my desktop and 1 GB on my laptop running a Pentium-M 2.1GHz, single core (like >10yr old).

"Amazingly" running 2012 pc linux 32 bit, the 1gb laptop can run stuff like chromium, openoffice, torrents, etc etc, all at the same time and without much problem.
 
My quad-core desktop 4gb/64 bit is so bloated that it's not even funny. And I also run zram to compress ram in order to avoid swapping out to the ssd. In any case, thinking about this anomaly, I used 32bit browser packages on the 64bit desktop... ram was suddenly much better and I was doing the exact same things Roll Eyes

Quote
I am thinking you are just spouting off without actually knowing the technical facts. The reason is ostensibly that, for example, Javascript forces the use of garbage collection, which is an example of an algorithm which is not as accurate and performant as expert human-designed memory deallocation. And memory allocation is a hard problem, which is ostensibly why Mozilla funded the creation of Rust, with its new statically compiled memory deallocation model to aid the human with compiler-enforced rules.

1) Let's just say that even the packages of the distribution are garbage compared to the mozilla builds (PGO-optimized). Download a mozilla build from mozilla and then download a precompiled binary from your distro. Run a javascript benchmark and tell me the results. Same C language, much different results for the user depending on where he actually got his binary.

2) Even loading a single blank page, mozilla wants >200mb.

3) Loading 5 tabs with 5 different sites, where none is running any js (noscript activated), and there is no flash involved, I go to ~400mb. This is bullshit. I doubt their problems lie in the language.

4) Seriously, wtf are you loading that you need a reboot every day with 16gb? Roll Eyes


Man haven't you noticed that the capabilities of webpages have increased. You had no DHTML then thus nearly no Javascript or Flash on the web page.

With firefox I don't have flash, I have ipc container off, I block javascript (and the pages look like half-downloaded shit) and still the bloat is there.

Quote
Above you argued that bloat doesn't matter and now you argue it does.   Roll Eyes

My argument is that people using C haven't done wonders for speed, due to the existence of bloat and bad practices.

We are immersed in junk software that abuses hardware resources, and we are doing so with software written in languages that are supposedly very efficient and fast. I believe we can allow people to write software that is slower if the languages are much simpler. Microsoft took a step in that direction with VBasic in the 90s... I used it for my first GUI programs... interesting experience. And I'd bet that the programs I made back then are much less bloated than today's junk, even if written in C.

Quote
Agreed you do not appear to have the natural inclination for C. But C is not bullshit. It was a very elegant portable abstraction of assembly that radically increased productivity over assembly.

It was useful for writing a Unix back in 1970. It may not be useful today for coding everything with it.

Quote
Your talent probably lies elsewhere. I don't know why you try to force on yourself something that your mind is not structured to do well.

I don't have the same issue with the structures of basic or pascal, so clearly it's not an issue of talent. More like a c-oriented or c-non-oriented way of thinking.

Quote
I strongly suggest you stop blaming your handicaps and talents on others:

Nice article. I can relate to what he's writing. However what I'm saying here is different. For example eating with chopsticks is not a talent/dexterity I'm envious of. I prefer forks because I consider them superior and I would not spend any of my time to learn eating with chopsticks. It's efficiency-oriented thinking. Coding C is ...chopsticks. We need forks. I can't say it any more simply than that.

You are not an expert programmer, ostensibly never will be, and should not be commenting on the future of programming languages. Period. Sorry. Programming will never be reduced to art form for people who hate abstractions, details, and complexity.

My problem is with unnecessary complexity. In any case, the problem is not to dumb down programming languages. I explicitly said that you can let the language be tunable to anything the programmer designs but you can also make it work out of the box with concessions in terms of speed.

In a way, even C is acting that way because where it is slow you have to go down to asm. But that's 2 layers of complexity instead of 1 layer of simplicity and 1 of elevated complexity.

Quote
Sorry never! You and Ray Kurzweil are wrong and always will be. Sorry, but frankly. I don't care if you disagree, because you don't have the capacity to understand.

Kurzweil's problem is not his "small" opinions on subject A or B. It's his overall problematic vision of what humanity should be, or, to put it better, what humanity should turn into (transhumanism / human+machine integration). This is fucked up.
TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 257
April 12, 2016, 02:01:12 AM
Last edit: April 12, 2016, 08:42:38 AM by TPTB_need_war
 #969

but every time I see serious optimizations needed, they all have to go down to asm.

You are again talking half-truths with half-lies.

If one were to write an entire application in assembly, they would miss high-level semantic optimizations.

You erroneously think the only appropriate tool for the job of programming is low-level machine-code semantics. You don't seem to grasp that we build higher-level semantics so we can express (and reason about) what we need to. For example, if I really want to iterate a List, then I want to say List.map() and not write a few hundred lines of assembly code. The higher-level semantics of how I can employ category theory functors on that List impact my optimization more than recoding in assembly language. For example, in a non-lazy, inductive language (e.g. where Top is at the top of all types, not Haskell where Bottom populates all types), I will need to manually do deforestation when chaining functors which terminate in the middle of the list.
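For illustration, a minimal Rust sketch of that kind of higher-level optimization: lazy iterator adapters fuse a map chain that terminates in the middle of the list into a single loop with no intermediate list, which is essentially the deforestation described above (the numbers and closures are arbitrary):

Code:
fn main() {
    let xs: Vec<i64> = (1..=1_000_000).collect();

    // map / take_while / sum are lazy adapters: this compiles to one loop
    // over xs with an early exit, allocating no intermediate collection.
    let total: i64 = xs.iter()
        .map(|x| x * 2)
        .take_while(|x| *x < 1_000) // terminates in the middle of the list
        .sum();

    println!("{}", total); // 249500
}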

Why? Because c, as a source, is just a textfile. The actual power of the language is the compiler. And the compilers we have suck badly - even the corporate ones like Intel's icc - which is better in vectorizing and often a source for ripping off its generated code, but still.

While it is true that for any given higher-level program code, hand-coding it in assembly can typically make it a few percent faster and leaner (and in the odd case a significant percentage), this belies the reality that program code is not static. It is being studied by others and improved. So that hand-coded assembly would have to be scrapped and rewritten each time edits are made to the higher-level program code, which is too much tsuris and too much of a waste of expert man-hours in most scenarios. If we didn't code in the higher-level programming language, then we couldn't express and optimize higher-level semantics.

Your comprehension of the art of programming is simplistic and incorrect.

There are still people in this world who can't afford 32 GB of RAM and if we forsake optimization, then a power user such as myself with 200 tabs open on my browser will need 256 GB of RAM. A typical mobile phone only has 1 GB.

I have to reboot my Linux box daily because I only have 16 GB of RAM and the garbage collector of the browser totally freezes my machine.


I have 4 GB on my desktop and 1 GB on my laptop running a Pentium-M 2.1GHz, single core (like >10yr old).

"Amazingly" running 2012 pc linux 32 bit, the 1gb laptop can run stuff like chromium, openoffice, torrents, etc etc, all at the same time and without much problem.
 
My quad-core desktop 4gb/64 bit is so bloated that it's not even funny. And I also run zram to compress ram in order to avoid swapping out to the ssd. In any case, thinking about this anomaly, I used 32bit browser packages on the 64bit desktop... ram was suddenly much better and I was doing the exact same things Roll Eyes

Why are you offended or surprised that double-wide integers consume double the space, and that a newer platform is less optimized? Huh Progress is not free.

A Model T had resource-free air conditioning. I prefer the modern variant, especially here in the tropics.

I am thinking you are just spouting off without actually knowing the technical facts. The reason is ostensibly that, for example, Javascript forces the use of garbage collection, which is an example of an algorithm which is not as accurate and performant as expert human-designed memory deallocation. And memory allocation is a hard problem, which is ostensibly why Mozilla funded the creation of Rust, with its new statically compiled memory deallocation model to aid the human with compiler-enforced rules.

I doubt their [Mozilla Firefox's] problems lie in the language.

Yes and no. That is why they designed Rust to help them get better optimization and expression over the higher-level semantics of their program.

4) Seriously, wtf are you loading that you need a reboot every day with 16gb? Roll Eyes

YouTube seems to be a culprit. And other Flash videos such as NBA.com. Other times it is a bad script.


Man haven't you noticed that the capabilities of webpages have increased. You had no DHTML then thus nearly no Javascript or Flash on the web page.

With firefox I don't have flash, I have ipc container off, I block javascript (and the pages look like half-downloaded shit) and still the bloat is there.

Mozilla Firefox is ostensibly not sufficiently modularly designed to have all bloat associated with supporting rich media and scripting disappear when you disable certain plugins and features.

This is a higher-level semantics problem which belies your erroneous conceptualization of everything needing to be optimized in assembly code.

Above you argued that bloat doesn't matter and now you argue it does.   Roll Eyes

My argument is that people using C haven't done wonders for speed, due to the existence of bloat and bad practices.

We are immersed in junk software that abuses hardware resources, and we are doing so with software written in languages that are supposedly very efficient and fast. I believe we can allow people to write software that is slower if the languages are much simpler. Microsoft took a step in that direction with VBasic in the 90s... I used it for my first GUI programs... interesting experience. And I'd bet that the programs I made back then are much less bloated than today's junk, even if written in C.

The solution is not to go backwards to lower-level semantics but to go to even higher-level semantic models that C can't express and thus can't optimize.

Your conceptualization of the cause of the bloat and its solution is incorrect.

Your talent probably lies elsewhere. I don't know why you try to force on yourself something that your mind is not structured to do well.

I don't have the same issue with the structures of basic or pascal, so clearly it's not an issue of talent. More like a c-oriented or c-non-oriented way of thinking.

C is more low-level and less structured than Pascal, which enables it to do more tricks (which can also be unsafe). Also you may prefer verbose words instead of symbols such as curly braces. Also you may prefer that Pascal makes explicit in the syntax what is implicit in the mind of the C programmer. You would probably hate Haskell with a deep disgust. I actually enjoy loading it all in my head and visualizing it as if I am the compiler.

Note I have argued that Haskell's use of spaces instead of parentheses to group function inputs at the use site means the programmer has to keep the definition-site declaration in his head in order to read the code. Thus I have argued parentheses are superior for readability. There are counterarguments, especially that parentheses can't group arguments when the function name is used in the infix position.

Note I got a very high grade in both Pascal and Fortran, so it isn't like a skill in C is mutually exclusive with a skill in those other more structured languages, although I'm not a counter-example to the converse.

TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 257
April 12, 2016, 02:16:36 AM
 #970

AlexGR, it is possible that what you were trying to say is that C is too low-level to be able to reduce higher-level semantic-driven bloat, and that C doesn't produce the most optimal assembly code, so one might as well use assembly for low-level work and a higher-level language otherwise.

In that case, I would still disagree that we should use assembly in every case where we need lower-level expression instead of a low-level language that is higher-level than assembly. Every situation is different and we should use the semantics that best fit the scenario. I would agree with the point of using higher-level languages for scenarios that require higher-level semantics as the main priority.

I would also agree that the higher-level languages we have today do lead to bloat because they are all lacking what I have explained upthread. That is why I am and have been for the past 6 years contemplating designing my own programming language. I also would like to unify low-level and higher-level capabilities in the same language. Rust has advanced the art, but is not entirely what I want.

TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 257
April 12, 2016, 02:22:35 AM
 #971

Rust appears to have everything except first-class intersections; also, the compilation targets and the modularity of compilation targets may be badly conflated into the rest of the compiler (haven't looked, but just wondering why the fuck they didn't leverage LLVM?).

My presumption was incorrect. Apparently Rust outputs to LLVM:

https://play.rust-lang.org/ with:

fn main() {
    println!("Hello, world!");
}
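(For what it's worth, and assuming nothing beyond the standard rustc flags: compiling locally with rustc --emit=llvm-ir main.rs dumps the LLVM IR that rustc hands to the LLVM backend, which is an easy way to confirm the integration.)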

AlexGR
Legendary
Activity: 1708
Merit: 1049
April 12, 2016, 04:45:58 AM
 #972

AlexGR, it is possible that what you were trying to say is that C is too low-level to be able to reduce higher-level semantic-driven bloat, and that C doesn't produce the most optimal assembly code, so one might as well use assembly for low-level work and a higher-level language otherwise.

In that case, I would still disagree that we should use assembly in every case where we need lower-level expression instead of a low-level language that is higher-level than assembly. Every situation is different and we should use the semantics that best fit the scenario. I would agree with the point of using higher-level languages for scenarios that require higher-level semantics as the main priority.

I would also agree that the higher-level languages we have today do lead to bloat because they are all lacking what I have explained upthread. That is why I am and have been for the past 6 years contemplating designing my own programming language. I also would like to unify low-level and higher-level capabilities in the same language. Rust has advanced the art, but is not entirely what I want.

What I'm trying to say is something to that effect: We have to differentiate between
a) C as a language in terms of what we write as source (the input so to speak)
and
b) the efficiency of binaries that come out from the compiler.

The (b) component changes throughout time and is not constant (I'm talking about the ratio of performance efficiency versus the hardware available at each given moment).

When C was created back in the 70's, every byte and every clock cycle were extremely important. If you wanted to write an operating system, you wouldn't do that with a new language if it was like n times slower (in binary execution) than writing it in assembly. So the implementation, in order to stand on merit, *HAD* to compile efficient code. It was a requirement because inefficiency was simply not an option. 1970 processing power was minimal. You couldn't waste that.

Fast forward 45 years later. We have similar syntax and stuff in C but we are now leaving a lot of performance on the table. We are practically unable to produce speed-critical programs (binaries) from C alone. A hello world program might be of a similar speed with a 1980's hello world, but that's not the issue anymore... Processors now have multiple cores, hyperthreading, multiple math coprocessors, multiple SIMD units, etc. Are we properly taking advantage of these? No.

Imagine trying to create a new coin with a new hash or hash-variant that is written in C. You can almost be certain that someone else is doing at least 2-3-5x in speed by asm optimizations and you are thus exposed to various economic and mining attack vectors. Why? Because the compilers can't produce efficient vectorized code that properly uses the embedded SIMD instruction sets. Someone has to do it by hand. That's a failure right there.

I'm not really proposing to go assembly all the way. No. That's not an option in our era. What we need is far better compilers but I don't think we are going to get them - at least not from human programmers.

So, I'm actually saying the opposite, that since C (which is dependent upon the compilers in order to be "fast") is now fucked up by the inefficient compilers in terms of properly exploiting modern hardware (and not 3-4 decades old hardware - which it was good at), and is also "abrasive" in terms of user-interfacing / mentality of how it is syntaxed/constructed, perhaps using friendlier, simpler and more functional languages while making speed concessions may not be such a disaster, especially for non-speed critical apps.
smooth
Legendary
Activity: 2968
Merit: 1198
April 12, 2016, 05:03:30 AM
 #973

@AlexGR I'm still puzzled at what the hell you are going off about. It seems almost entirely strawman to me.

Very few new applications are being written in C afaik (and I do have somewhat of an idea). It is only being done by a few fetishists such as jl777.

Most development now is being done in higher level languages which are exactly what you just proposed: a small (maybe) speed concession for greater expressiveness and friendliness (though that word is quite vague) and in some sense (though not in others) simpler than C.  The odd exception here is C++. Not surprisingly C++ is very much on the way out.

The language that is eating (almost) everything is JavaScript, so if you want to comment on the current state of languages I would start there, not with something that is relegated to being largely a legacy tool (other than low level). Of course other languages are being used too, so they're also worthy of comment, but ignoring JavaScript and fixating on C is insane.
TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 257
April 12, 2016, 07:47:45 AM
Last edit: April 12, 2016, 08:07:32 AM by TPTB_need_war
 #974

Most development now is being done in higher level languages which are exactly what you just proposed: a small (maybe) speed concession for greater expressiveness and friendliness (though that word is quite vague) and in some sense (though not in others) simpler than C.  The odd exception here is C++. Not surprisingly C++ is very much on the way out.

C++ is a painful, fugly marriage of high-level and low-level, which loses the elegance of high-level semantics. I think it hasn't died because no other language could combine C's low-level control with generics (templates). Perhaps something like Rust (perhaps with my ideas of extensions to Rust) will be the death blow to C++ and also Java/Scala/C# (probably Python, Ruby, PHP, and Node.js as well). And also perhaps Javascript...

The language that is eating (almost) everything is JavaScript

Which appears to me to be a great tragedy of inertia (it was originally designed for short inline scripts for simple DHTML on web pages ... then Google Maps and webmail clients arrived...). I am in the process of attempting to prove this.

I believe there is no reason we can't marry high-level, low-level, static typing, and fast JIT compilation and still be able to write programs with a text editor without tooling (for as long as tooling is in the debugger in the "browser").

This low degrees-of-freedom crap of being forced to fight with bloated IDEs such as Eclipse needs to be reverted.

This dilemma of needing to place the modules of your project all in the same Git repository needs to be replaced with a good package manager which knows how to build from specific changesets in orthogonal repositories, where the relevant changeset hashes from the referenced module are accumulated in this referencing module so that merging DVCS remains sane. In other words, the package manager (module system) needs to be DVCS aware.

smooth
Legendary
Activity: 2968
Merit: 1198
April 12, 2016, 08:08:02 AM
 #975

it was originally designed to be

That often does not matter.

Java was designed to display games and animated advertisements on web pages. It hardly ever did that and instead became the dominant language for enterprise software, a huge pivot.

Such attempts at central planning rarely succeed. (Most languages designed to be ____ end up being used for nothing at all, of course.)

Build something interesting and let it free to find its place.

Quote
In other words, the package manager (module system) needs to be DVCS aware.

Nothing wrong with that, but most people just reference named releases.
TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 257
April 12, 2016, 08:21:02 AM
Last edit: April 12, 2016, 08:33:05 AM by TPTB_need_war
 #976

it was originally designed to be

That often does not matter

I sort of disagree with you, at least in terms of the exemplary examples of greatest success, if not entirely.

C was definitely designed for low-level operating systems programming, and it was the most popular language ever, bar none, because it provided just the necessary abstraction of assembly needed for portability and no more. For example, it didn't try to make pointers safe or add other structure.

Haskell was designed to express math in programming tersely without extraneous syntax, hence the very high priority placed on global type inference, which is why they haven't implemented first-class disjunctions (aka unions), which happen to be the one feature missing from Haskell that makes it entirely unsuitable for my goals. Haskell forsook practical issues such as the non-determinism of lazy evaluation and its impact on, for example, debugging (e.g. finding a memory leak). Haskell is by far the most successful language for its target use case.

Javascript was designed to be lightweight for the browser because you don't need a complex type system for such short scripts, and thus you can edit it with a text editor and don't even need a debugger (alert statements worked well enough). Javascript is the most popular language for web page scripts. But the problem is web pages changed to Apps and Javascript was ill designed for this use case. Transcoding to Javascript is a monkey patch that must die because it results in a non-unified type system!

PHP was designed to render LAMP web pages on the server and became the most popular language for this use case, except the problem is server Apps need to scale and PHP can't scale, thus Node.js and Java, etc.

Java was designed for write once, run everywhere, and it gained great popularity for this use case, but the problem was Java has some stupid design decisions which made it unsuitable as the language to use everywhere for applications.

C# was designed to be a better Java to run on Microsoft .Net and thus it died with .Net.

C++ was designed to be a marriage of C and OOP and it is dying with the realization that OOP (subclassing) is an anti-pattern and because C++ did not incorporate any elegance from functional programming languages.

All the other examples of languages had muddled use-cases for which they were designed and thus were not very popular.

So I think it is definitely relevant what a language was designed for.


smooth
Legendary
Activity: 2968
Merit: 1198
April 12, 2016, 08:35:44 AM
Last edit: April 12, 2016, 08:57:04 AM by smooth
 #977

it was originally designed to be

That often does not matter

I sort of disagree with you, at least in terms of the exemplary examples of greatest success, if not entirely.

C was definitely designed for low-level operating systems programming, and it was the most popular language ever, bar none, because it provided just the necessary abstraction of assembly needed for portability and no more. For example, it didn't try to make pointers safe or add other structure.

C was most certainly not designed as a language for all sorts of business applications, Windows desktop applications (of course such a thing did not exist at the time it was designed, but conceptually it was not intended for that), etc., which is where most of its usage was when it became the most popular language.

Quote
Haskell

Has never really been widely used for anything (at least not yet) so irrelevant to my point.

Quote
But the problem is web pages changed to Apps and Javascript was ill designed for this use case.

And yet it is widely used for this (and growing rapidly for all sorts of uses), supporting my point.

Quote
PHP

Good counterexample.

Quote
Java was designed write once, run everywhere

Somewhat. It was designed for write-once run everywhere on the web (originally interactive TV and maybe PDAs since the web didn't quite exist yet, but the same principle). None of the business software use case where it became dominant was remotely part of the original purpose (nor do these business software deployments commonly even make use of code that is portable and "runs everywhere").

Quote
C# was designed to be a Java to run on Microsoft .Net and thus it died with .Net.

C# is not dead, it is still quite popular for Windows development (and some cross-platform development). But as you say it was designed to copy Java after Java was already successful so in that sense it is a counterexample, but a "cheat" (if you copy something that has already organically found a strong niche, chances are your copy will serve a similar niche).

Quote
So I think it is definitely relevant what a language was designed for.

Sometimes, but often not. It is of course relevant, it just doesn't mean that what it will end up being used for (if anything) is the same as what it was designed to be used for.

Further examples:

Python: designed for scripting, but is now the most used (I think) language for scientific and math computing (replacing Fortran). Of course it is used for other things too, but most are also far removed from scripting.

BASIC: Designed for teaching, but became widely used for business software on minicomputers, and then very widely used for all sorts of things on PC.

COBOL and FORTRAN: counterexamples; used as designed.

Perl: had a bit of a golden age beyond its scripting purpose, but mostly seems a counterexample, often used as intended.

Ruby: Not really sure about this one. Seems to be getting used for quite a few things now, mostly web development of course, but some others. This one is mostly off my radar so I don't really know the history.
TPTB_need_war (OP)
Sr. Member
****
Offline Offline

Activity: 420
Merit: 257


View Profile
April 12, 2016, 09:27:44 AM
Last edit: April 12, 2016, 11:36:30 AM by TPTB_need_war
 #978

it was originally designed to be

That often does not matter

I sort of disagree with you at least in terms of the exemplary examples of greatest success if not entirely disagree.

C was definitely designed for low-level operating systems programming and it was the most popular language ever bar none because it provided just the necessary abstraction of assembly needed for portability and no more. For example, it didn't try to make pointers safe or other structure.

C was most certainly not designed as a language for all sorts of business applications, Windows desktop applications (of course such a thing did not exist at the time it was designed, but conceptually it was not intended for that), etc. as was most of its usage when it became the most popular language.

The reason it became popular is what I wrote before (bolded and underlined above for emphasis). The reason C++ became popular is that it promised those benefits plus OOP.

I know because I coded WordUp in assembly then switched to C by version 3.0 and was amazed at the increase in my productivity. And the delayed switch away from assembly afair revolved around which good compilers were available for the Atari ST. Then for CoolPage I chose C++ for precisely that underlined reason and the integration with MSVC++ (Visual Studio) Model View framework for a Windows GUI application.

C was adopted for a use case for which it is not the best suited, because there was nothing better around and thus it was the best suited at that time. Language technology was very immature at that time. There was more out there as research which hadn't yet been absorbed into mainstream languages and experience.

We are in a different era now where we have a lot of experience with the different programming paradigms and programming language design is an art now of being able to distil all the existing technology and experience in the field to an ideal design.

Haskell

Has never really been widely used for anything (at least not yet) so irrelevant to my point.

Apparently you did not read my implied point that the mass market is not the only possible target market a language is designed for. As I wrote before, Haskell was designed for an academic, math-geek market. It is very popular and extremely successful within that tiny market.

It will be very difficult for you to find an Ivy League computer science graduate who hasn't learned Haskell and who doesn't wish to use it on some projects. That is what is called mindshare. Any language hoping to be the future of multi-paradigm programming has to capture some of that mindshare. Which Scala attempted to do, but it unfortunately bit down too hard on that anti-pattern, subclassing.

But the problem is web pages changed to Apps and Javascript was ill designed for this use case.

And yet it is widely used for this (and growing rapidly for all sorts of uses), supporting my point.

And it is crashing my browser, and the transcoding hell of a non-unified type system is headed toward a clusterfuck. I have recognized how the Internet browser "App" will die. I will finally get my "I told you so" (in a positive way) on that totalitarian Ian Hickson and the rigor mortis of totalitarianism at the W3C.

Java was designed write once, run everywhere

Somewhat. It was designed for write-once run everywhere on the web (originally interactive TV and maybe PDAs since the web didn't quite exist yet, but the same principle). None of the business software use case where it became dominant was remotely part of the original purpose (nor do these business software deployments commonly even make use of code that is portable and "runs everywhere").

Clearly Sun Microsystems was trying to disrupt Microsoft's challenge in server computing, and so Java was designed to co-opt Microsoft Windows on the general client so that Microsoft couldn't dictate its servers to everyone. What Sun didn't anticipate was Linus Torvalds and Tim Berners-Lee.

C# was designed to be a Java to run on Microsoft .Net and thus it died with .Net.

C# is not dead, it is still quite popular for Windows development (and some cross-platform development).

Windows is a dead man walking. Cross-platform, open source .Net (forgot the name) isn't gaining momentum. One good language to succeed the current crop will put the final nail in the coffin.

But as you say it was designed to copy Java after Java was already successful so in that sense it is a counterexample, but a "cheat" (if you copy something that has already organically found a strong niche, chances are your copy will serve a similar niche).

It was an attempt to defeat Sun's strategy of co-opting the client. And it helped to do that, but the more salient disruption came from the Internet browser, Linux, and LAMP.

So I think it is definitely relevant what a language was designed for.

Sometimes, but often not. It is of course relevant, it just doesn't mean that what it will end up being used for (if anything) is the same as what it was designed to be used for.

I think it is much more deterministic as explained above. The programming language designer really needs to understand his target market and also anticipate changes in the market from competing technologies and paradigm shifts.

Further examples:

Python: designed for scripting, but is now the most used (I think) language for scientific and math computing (replacing fortran). Of course it is used for other things too, but most are also far removed from scripting.

The challenge we've had since graduating from C has been to find a good language for expressing higher-level semantics. As you lamented upthread, all attempts have improved some aspects while worsening others. Python's main goal was to be very much in tune with the readability of the code and the semantics the programmer wants to express.

Thus it became a quick way to code with a high degree of expression. Even Eric Raymond raves about that aspect of Python.

But the downside of Python is the corner cases, because afaik the type system is neither sound nor unified.

So it works for small programs but for large scale work it can become a clusterfuck of corner cases that are difficult to optimize and work out.
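A sketch of the kind of corner case I mean, written in TypeScript only because it makes the contrast visible; the Utxo shape and spendable function are hypothetical names. In Python the equivalent type mix-up surfaces only at runtime, somewhere deep inside a large program, whereas a unified, checked type system rejects it before the program runs.

Code:
interface Utxo { value: number; confirmations: number }

function spendable(utxos: Utxo[], minConf: number): number {
  // Sum the value of outputs with at least minConf confirmations.
  return utxos
    .filter(u => u.confirmations >= minConf)
    .reduce((sum, u) => sum + u.value, 0);
}

const utxos: Utxo[] = [{ value: 50, confirmations: 3 }, { value: 25, confirmations: 0 }];
console.log(spendable(utxos, 1));   // 50
// spendable(utxos, "1");           // rejected at compile time; in Python this kind of
//                                  // mix-up is only a runtime error (or silent misbehavior)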

So I am arguing that Python was not designed solely for scripting but also for programmer ease-of-expression and readability and this has a natural application for some simpler applications, i.e. that aren't just scripts. This seems to have been a conscious design consideration.

BASIC: Designed for teaching, but became widely used for business software on minicomputers, and then very widely used for all sorts of things on PC.

Because that is what programmers were taught. So that is what they knew how to use. That doesn't apply anymore. My first programming was in BASIC on an Apple II because that was all that was available to me in 1983. I messed around some years before that with a TRS-80 but not programming (because I didn't know anyone who owned one and I could only play around with it for a few minutes at a time in the Radio Shack store). My first exposure to programming was reading the Radio Shack book on microprocessor design and programming when I was 13 in 1978 (afair due to being relegated to my bed for some days due to a high ankle sprain from football). Perhaps that is why I can program in my head, because I had to program only in my head from 1978 to 1983.

I must mea culpa that around that age my mother caught me (as we were leaving the mall) having stolen about $300 of electronic goods from inside the locked glass display case of Radio Shack. I had reached in when the clerk looked the other direction. (Later in teenage hood I became an employee of Radio Shack but I didn't steal). I had become quite the magician and I was eager to have virtually everything in Radio Shack to play with or take apart for parts (for my various projects and experiments in my room). She made me return everything to the store, but couldn't afford to buy any of it for me apparently. I think we were on food stamps and we were living in poverty stricken neighborhoods for example in Baton Rouge where my sister and I were the only non-negro kids in the entire elementary school. Then my mom got angry when my sister had a black boyfriend in high school.  Roll Eyes

COBOL and FORTRAN: counterexamples; used as designed.

Agreed.

smooth
Legendary
*
Offline Offline

Activity: 2968
Merit: 1198



View Profile
April 12, 2016, 09:46:49 AM
Last edit: April 12, 2016, 10:08:12 AM by smooth
 #979

C was adopted for a use case for which it is not the best suited, because there was nothing better around and thus it was the best suited at that time. Language technology was very immature at that time. There was more out there as research which hadn't yet been absorbed into mainstream languages and experience.

On this I agree. That is, at a time long after it was designed, for purposes very different from those for which it was designed, it became the best (i.e. least bad; nothing better) available alternative.

Quote
We are in a different era now where we have a lot of experience with the different programming paradigms and programming language design is an art now of being able to distil all the existing technology and experience in the field to an ideal design.

On this I tend to disagree. Witness JavaScript.

It is very difficult, if not impossible, to centrally plan what the marketplace will, in its organic and evolutionary wisdom of how to weigh between conflicting factors, consider "ideal" at any particular point in time, particularly a point in time distant from the period of initial design (which is still very much happening today; again consider Javascript, or Python, or Ruby -- these are all "old" languages that gained their recent popularity long after their design).

Quote
Clearly Sun Microsystems was trying to challenge Microsoft in server computing and so Java was designed to co-opt Microsoft Windows on the general client so Microsoft couldn't dictate their servers on everyone. What Sun didn't anticipate was Linus Torvalds.

This is chronologically and strategically wrong in a lot of ways, but that is off topic so I'll not elaborate now.

Quote
C# was designed to be a Java to run on Microsoft .Net and thus it died with .Net.

C# is not dead, it is still quite popular for Windows development (and some cross-platform development).

One good language to succeed the current crop will put the final nail in the coffin.

Heh, JavaScript.

It does seem more likely for the time being that the sort of dominance that was achieved by C and Java at their respective peaks won't be repeated any time soon. We'll probably continue to see more fragmentation, with multiple languages being used, until there is a paradigm shift that brings significant productivity gains without high costs. Examples of this were C's low cost of adding abstraction over asm and Java's low cost of adding runtime safety (and the OOP fad) over C (or C++, especially in its earlier forms without smart pointers).

I don't see that anywhere today. There are identifiable advantages to be had over the current market frontier leaders, but they all come with high costs.
TPTB_need_war (OP)
Sr. Member
****
Offline Offline

Activity: 420
Merit: 257


View Profile
April 12, 2016, 10:11:41 AM
Last edit: April 12, 2016, 10:38:21 AM by TPTB_need_war
 #980

C was adopted for a use case for which it is not the best suited, because there was nothing better around and thus it was the best suited at that time. Language technology was very immature at that time. There was more out there as research which hadn't yet been absorbed into mainstream languages and experience.

On this I agree. That is, at a time long after it was designed, for purposes very different from those for which it was designed, it became the best (i.e. least bad; nothing better) available alternative.

C was designed to be exactly what it was used for, one step above the metal with portability abstraction.

That business applications didn't know they needed better (because better wasn't available) is irrelevant to my point, which is that a language is only best at what it was designed for.

C was employed for a use case where it was not best because it had sort of a temporary monopoly due to circumstances.

Please make sure you understand that key distinction because I think it will be crucial for understanding about what is about to happen to Javascript.

We are in a different era now where we have a lot of experience with the different programming paradigms and programming language design is an art now of being able to distil all the existing technology and experience in the field to an ideal design.

On this I tend to disagree. Witness Javascript.

It is very difficult, if not impossible, to centrally plan what the marketplace will, in its organic and evolutionary wisdom of how to weigh between conflicting factors, consider "ideal" at any particular point in time.

Emphatically disagree. Everything the market has done has been entirely rational. Javascript is only being widely adopted because the Internet browser has an undeserved monopoly. Breaking that monopoly is going to open Javascript to the realization that it is not the best language for the Apps people are using it for.

If Rust adds first-class asynchronous programming, and someone disrupts the Internet browser, this could be a very rapid waterfall collapse for Javascript.
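By first-class asynchronous programming I mean async support built into the language syntax itself, so sequential-looking code suspends instead of nesting callbacks. A rough sketch of the idea, written in TypeScript (which already has async/await) rather than Rust; fetchJson and the example.com URL are hypothetical.

Code:
// The compiler understands async functions: await suspends without blocking a thread.
async function fetchJson(url: string): Promise<any> {
  const res = await fetch(url);
  return res.json();
}

async function showHeight(): Promise<void> {
  const info = await fetchJson("https://example.com/chaininfo");
  console.log("block height:", info.height);
}

showHeight().catch(err => console.error(err));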

Clearly Sun Microsystems was trying to challenge Microsoft in server computing and so Java was designed to co-opt Microsoft Windows on the general client so Microsoft couldn't dictate their servers on everyone. What Sun didn't anticipate was Linus Torvalds.

This is chronologically and strategically wrong in a lot of ways, but that is off topic so I'll not elaborate now.

I was around then and I remember the first magazine articles saying the VM could be implemented in hardware and thus it was going to be a new kind of computer or device. Again this was all targeted at Microsoft's growing monopoly which was threatening to cannibalize Sun's market from the bottom up. So Sun decided to attack at the bottom, but changed strategy mid-stream as some of the expectations didn't materialize.

C# was designed to be a Java to run on Microsoft .Net and thus it died with .Net.

C# is not dead, it is still quite popular for Windows development (and some cross-platform development).

One good language to succeed the current crop will put the final nail in the coffin.

Heh, Javascript.

No, Javascript can't disrupt the non-browser Apps entirely, because it is not the best fit for applications. It is a scripting language. And transcoder hell is not a solution, but rather stopgap monkey patching. It has all sorts of cascaded tsuris that will explode over time, especially for decentralized development.

It does seem more likely for the time being that the sort of dominance that was achieved by C and Java in their respective peaks won't be repeated any time soon.

I disagree. The Internet browser and the W3C monopoly (oligarchy) have been standing in the way of a better general purpose programming language.

I understand this brought together huge economies-of-scale, but unfortunately they've been squandered on an ill-fitting design intended to funnel everything through the browser. Then we have Apple and Android trying to funnel everything through App stores. The choke points are the problem.

We'll probably continue to see more fragmentation, with multiple languages being used, until there is a paradigm shift that brings significant productivity gains without high costs. Examples of this were C's low cost of adding abstraction over asm and Java's low cost of adding runtime safety over C.

I don't see that anywhere today. There are identifiable advantages to be had over the current market frontier leaders, but they all come with high costs.

I see it. Stay tuned.
