I learned to program in Borland Turbo C++. I was able to draw to the screen by just calling geometric shape functions, and it made pictures! Then I found out that if I drew a shape, called the xor function on it, and drew a new but slightly different shape, I could make animations. So making little sprites that looked like they were running, out of only 1000 lines of C++ code, was awesome.

Some friends and I got together and made a Final Fantasy-like game using these tricks: hand-crafted sprites and a game world that you could walk across. Every map was a whole screen, you would move to the adjacent map when you hit the sides, and on every step, if you rolled a natural 1, you would go into a fight with some enemies.

This was all pretty tedious, but it was a lot of fun for a bunch of high schoolers. And if we had known more computer science and software engineering, we probably would have done more, and it would have worked better. But unlike today, we didn't have to learn SFML or ActiveX or OpenGL just to start playing and get stuff working; we could just call circle(). One thing from back then that I really miss is how easy it was to do some complex things.
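(Not the commenter's actual code, but for anyone who never saw it, here is a minimal sketch of that xor trick under Turbo C++'s BGI library. The API calls (initgraph, circle, getimage, putimage with XOR_PUT) are the real graphics.h interface; the sprite, coordinates, and timing are made up, and it assumes real-mode Turbo C++ where farmalloc/farfree live in alloc.h.)

```cpp
#include <graphics.h>   // Borland BGI graphics API
#include <conio.h>      // kbhit()
#include <alloc.h>      // farmalloc(), farfree()
#include <dos.h>        // delay()

int main()
{
    int gd = DETECT, gm;
    initgraph(&gd, &gm, "");            // autodetect, BGI driver on default path

    // Draw the sprite once, then capture it into an off-screen buffer.
    circle(20, 20, 10);
    unsigned size = imagesize(5, 5, 35, 35);
    void far *sprite = farmalloc(size);
    getimage(5, 5, 35, 35, sprite);
    putimage(5, 5, sprite, XOR_PUT);    // XOR over itself erases the original

    // Animate: one XOR draws the sprite, a second XOR erases it again.
    for (int x = 0; x < 600 && !kbhit(); x += 4) {
        putimage(x, 100, sprite, XOR_PUT);  // show
        delay(20);
        putimage(x, 100, sprite, XOR_PUT);  // erase; background is restored
    }

    farfree(sprite);
    closegraph();
    return 0;
}
```

The second XOR restores whatever pixels were underneath, which is what made cheap sprite movement possible without managing pixel buffers by hand.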
This might be my biggest disappointment with "modern" programming. I want direct access to the hardware, with stuff like $100 1 GHz 100+ core CPUs with local memories, and true multithreaded languages that use immutability and copy-on-write to implement higher-order methods and scatter-gather arrays. Instead we got proprietary DSP/SIMD GPUs with esoteric types like tensors that require the use of display lists and shaders to achieve high performance.

It comes down to the easy-vs-simple debate. Most paradigms today go the "easy" route, providing syntactic sugar and similar shortcuts to work within artificial constraints created by market inefficiencies like monopoly. So we're told that the latency between CPU and GPU is too long for old-fashioned C-style programming. Then we have to manage pixel buffers ourselves. We're limited in the number of layers we can draw, or the number of memory locations we can read/write simultaneously (like how old arcade boxes only had so many sprites). The graphics driver we're using may not provide such basic types as GL_LINES. This path inevitably leads to cookie-cutter programming and copypasta, giving software a canned feel like the old CGI-BIN and Flash Player days.

Whereas the "simple" route would solve actual problems within the runtime, so that we can work at a level of abstraction of our choosing. For example, intrinsics and manual management of memory layout under SSE/AltiVec would be replaced by generalized (size-independent) vector operations on any type, with the offsets of variables within classes/structs decided internally. GPUs, FPUs and even hyperthreading would go away in favor of microcode-defined types and operations on arbitrary bitfields, more akin to something like VHDL/Verilog running on reprogrammable hardware. The idea being that computers should do whatever it takes to execute users' instructions, rather than forcing users to adapt their mental models to the hardware/software. Cross-platform compilation, emulation, forced hardware upgrades that ignore Turing completeness, vendor/platform lock-in and planned obsolescence are all symptoms of today's "easy" status quo. Whereas we could have the "simple" MIMD transputer I've discussed endlessly in previous comments, which just reconfigures itself to run anything we want at the maximum possible speed. More like how a Star Trek computer might run. In practice, that would mean a naive for-loop over individual bytes written in C would run at the same speed as a highly accelerated shader, because the compiler would decompose the intermediate code (i-code) into its dependent operations and distribute the computation across a potentially unlimited number of cores, integrating the results to exactly match a single-threaded runtime.
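To make the SSE/AltiVec contrast concrete, here is a hedged sketch (my illustration, not the commenter's): the same element-wise add written once against the x86-specific SSE intrinsics, and once in the generalized, size-independent form being argued for, where the mapping to whatever vector hardware exists is the toolchain's problem rather than the programmer's.

```cpp
#include <xmmintrin.h>  // SSE intrinsics: x86-specific, 4 floats at a time
#include <cstddef>

// Today's "easy" route: the programmer hard-codes the vector width
// and handles the leftover elements by hand.
void add_sse(float *a, const float *b, std::size_t n)
{
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(a + i, _mm_add_ps(va, vb));
    }
    for (; i < n; ++i)      // scalar tail for n not divisible by 4
        a[i] += b[i];
}

// The "simple" route the comment argues for: one size-independent
// operation on any element type; how it maps onto vector hardware is
// decided below this level of abstraction.
template <typename T>
void add(T *a, const T *b, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)   // trivially auto-vectorizable
        a[i] += b[i];
}
```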
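And for the naive-for-loop-on-bytes claim, the nearest mainstream gesture at it today is probably C++17's parallel algorithms (an analogy of mine, not the transputer-style hardware the comment wants): the per-byte operation below is order-independent, which is exactly the property that lets a runtime fan it out across cores while still producing results identical to the single-threaded loop.

```cpp
#include <algorithm>
#include <execution>   // C++17 parallel execution policies
#include <vector>
#include <cstdint>

int main()
{
    std::vector<std::uint8_t> pixels(1920 * 1080, 0);

    // The naive single-threaded byte loop from the comment,
    // e.g. inverting an 8-bit framebuffer:
    for (std::uint8_t &p : pixels)
        p ^= 0xFF;

    // Same observable result, but the runtime is free to distribute
    // the work across cores (with GCC this typically links against TBB):
    std::for_each(std::execution::par_unseq, pixels.begin(), pixels.end(),
                  [](std::uint8_t &p) { p ^= 0xFF; });
    return 0;
}
```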