Quote:
Originally Posted by candybar
Are you thinking like Lisp machines or Java bytecode type instruction sets for actual CPUs? The whole CISC-via-microprogramming approach is driven by the exact opposite trend. In the 80's and 90's, people used to care much more about elegant, nice instruction sets that are easier for compiler writers to reason about. Then everyone gave up and now it's only about best-case performance, features, compatibility and battery life, which is why we have instruction sets that are too ugly even for the hardware guys to handle. Also, compiler construction has become significantly more modularized, so compiler writers for high-level languages don't even target CPU instructions anymore, which has killed the pipe dream that all-knowing super-smart compilers for high-level languages can one day write better machine code than C compilers because they have more end-to-end context.
Are we then stuck writing C for performance-critical components forever? I think the best hope we have for allowing high-level programming in that situation is code generation. Languages like OCaml and Haskell are excellent for writing code generators. It's getting to the point where, for certain types of software whose requirements are too complex to code directly in a low-level language but whose performance requirements cannot be met any other way, the best approach may be to write a program in Haskell or OCaml that generates C source code. We don't have good tools for this approach yet, but it's already been done: http://en.wikipedia.org/wiki/FFTW. The challenge today is that you have to know both languages and have a decent background in compiler writing and code generation. With some advanced libraries, this could change, and people may be able to routinely write C libraries in Haskell.
Not thinking Lisp machines or bytecode machines.
The RISC vs. CISC "issues" came about from studies indicating that the vast majority of processor execution time was spent executing a very small subset of the available CISC instructions. So, to make a long story short, RISC processors came into vogue in the 80s. I think it is fair to say that today ARM architecture processors are dominant in the RISC domain. Yes, generally speaking, RISC consumes less power than CISC, although I can tell you for certain that Intel is sinking a lot of $ into managing power consumption on its x86-based CISC systems-on-chip (SoCs), and they are having a great deal of success. ARM-based SoCs have also reduced their power consumption a lot over the same period.
The complexity you refer to is driven by two things in my view: overcoming the memory bottleneck, and the need/desire for computer systems to handle increasingly complex activities. Caching, branch prediction logic, pipelining, multicore chips, ever-increasing clock speeds, etc. are all aimed at those goals. I see those trends continuing for a long time to come.
When I see a complete 4-core 64-bit x86 system, including a high-speed graphics engine, on a chip the size of my thumbnail, I am in awe. I think it is fair to say that managing complexity is a significant challenge in software development, and the easier that becomes, the better. In my view, the continuing evolution of programming languages takes direct aim at making complexity easier to manage, which will in turn make even more complex tasks feasible. I see hardware evolving as well to accommodate the software trends. Exactly how? Not sure, but I have to believe that the continuous evolution we've seen in miniaturization and speed will lead to a scaling up, so to speak, of the underlying "machine language" to accommodate the evolution of programming languages.
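By the way, to make the code-generation idea from the quote concrete: here's a toy sketch of a Haskell program that emits C source. It's entirely my own illustration (the names `dotProductC` and `dot_n` are made up, and FFTW's real generator, genfft, is far more sophisticated), but it shows the division of labor: the high-level language does the symbolic work at generation time, and C gets a straight-line hot path.

```haskell
-- Toy code generator: given a fixed vector length n, emit C source for a
-- fully unrolled dot product. The high-level language manipulates the
-- expression symbolically; the emitted C contains no loop at all.
module Main where

import Data.List (intercalate)

-- Emit one C function computing the dot product of two length-n arrays,
-- with the loop unrolled at generation time.
dotProductC :: Int -> String
dotProductC n = unlines
  [ "double dot_" ++ show n ++ "(const double *a, const double *b) {"
  , "    return " ++ body ++ ";"
  , "}"
  ]
  where
    term i = "a[" ++ show i ++ "] * b[" ++ show i ++ "]"
    body   = intercalate " + " (map term [0 .. n - 1])

main :: IO ()
main = putStr (dotProductC 4)
```

Running it prints a C function you could drop into a library. A real generator would also do algebraic simplification and common-subexpression elimination before emitting anything, which is exactly where a language like Haskell or OCaml earns its keep over writing the C by hand.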