Quote:
Originally Posted by jjshabado
One more point about abstractions - CS is a very Darwinian world. If an abstraction (be it a language, framework, library, whatever...) isn't making it easier for some group of people - it'll die (or live a small little life as a cool/intriguing/academic thing that people pull out in forums). There's a reason nobody is using Brain**** to write applications.
On a related note - what my company is trying to do is to basically build a new abstraction layer for engineers to use when working with big data. It's very cool (imho) but if we don't actually make things easier for people - we'll go out of business. In fact one of the most important pieces of feedback that we try to get is when do people stop using our abstraction layer and go down a level. If they're skipping our product and going down - it means we're missing something important in our abstraction.
This was pretty much my point. Your team saw a serious issue with the status quo, so you decided to roll back the abstractions to improve the process. I assume you are creating a new item from the (sort of) ground up because adding an extra layer to the tools out there wouldn't do anything useful. I'm saying that what your team is doing is progress.
I think of it the same as with an engineer. An engineer can use black-box abstractions to draw up the diagram for a bridge, but I would most certainly prefer that said engineer knew statics. Even if said engineer never solves a differential equation again, it is still nice to know that my life is in the hands of people who understand the *why* of the math and when to use it. When geometry isn't enough, I would hope they use the correct formulas when they plan and design a bridge and don't decide to remove a few supports because it will look more beautiful. I understand that programming isn't (usually) a life-and-death situation, but still. People's lives depend on programs functioning correctly more than ever before, yet people are fudging things up left and right by jumping on the bandwagon of the latest _____ craze.
Something more mundane: if people didn't break abstractions, we'd all still be driving '57 Chevys with no onboard computers, large gas-guzzling engines, etc.; or, if they didn't break abstractions, our computer chips would be the size of the Empire State Building.
It's interesting that you bring up circuits. No, I have not exactly dived into circuits, but I have had the (dis)pleasure of writing circuit-simulation programs, and I'll argue there are tons of lessons learned from this exercise, lessons about low-level representations that carry well into high-level languages. From 3 basic gates (plus a wire), you can represent so many things! I'll try to do the following from memory, so forgive mistakes:
You start with 4 items: And Gate, Or Gate, Inverter, Wire.
Then with a few basic concepts:
And Gate takes two wires and generates one output, with the following truth:
If input A == input B, return the value of A, otherwise, return 0
Or Gate takes two wires and generates one output, with the following truth:
If input A == input B, return the value of A, otherwise, return 1
Inverter takes one wire and generates one output, with the following truth:
Take the input A and return the opposite of A.
Very basic so far.
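The three gate rules above translate almost word for word into code. Here's a minimal sketch in Python (the function names and the 0/1 encoding are my own; no wires are modeled yet):

```python
# A minimal sketch of the three primitive gates as pure functions.
# Signals are just the integers 0 and 1.

def and_gate(a, b):
    # If the inputs agree, return their shared value; otherwise 0.
    return a if a == b else 0

def or_gate(a, b):
    # If the inputs agree, return their shared value; otherwise 1.
    return a if a == b else 1

def inverter(a):
    # Return the opposite of the input.
    return 1 - a
```

Note that these rules reproduce the usual truth tables: and_gate is 1 only for (1, 1), and or_gate is 0 only for (0, 0).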
Now add an Inverter to the output of an Or Gate to create a Nor Gate. Add an Inverter to the output of an And Gate to create a Nand Gate.
Now take an Or Gate and an And Gate on the same two inputs, run the And Gate's output (the carry) through an Inverter, and And that with the Or Gate's output: you have a Half Adder. Combine two Half Adders (with an Or Gate joining their carries) to create a Full Adder. Then with just 4 primitives you can represent many things, and even chain these to create adders of (theoretically) arbitrary width. This combination of primitives -> abstractions is a very good exercise, wouldn't you agree?
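The whole primitives -> abstractions climb fits in a few lines of Python. This is a sketch from the same memory, so the wiring follows the standard half-adder construction and the names are mine:

```python
# Primitive gates: 0/1 signals, no explicit wires.
def and_gate(a, b): return a & b
def or_gate(a, b):  return a | b
def inverter(a):    return 1 - a

# Derived gates: invert an output to get Nand and Nor.
def nand_gate(a, b): return inverter(and_gate(a, b))
def nor_gate(a, b):  return inverter(or_gate(a, b))

# Half Adder: the carry is the AND of the inputs; the sum is
# (a OR b) AND NOT carry.
def half_adder(a, b):
    carry = and_gate(a, b)
    s = and_gate(or_gate(a, b), inverter(carry))
    return s, carry

# Full Adder: two Half Adders plus an Or Gate joining the carries.
def full_adder(a, b, c_in):
    s1, c1 = half_adder(b, c_in)
    s, c2 = half_adder(a, s1)
    return s, or_gate(c1, c2)

# Chain Full Adders into an adder of any width (least bit first).
def ripple_adder(a_bits, b_bits):
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry
```

For instance, ripple_adder([1, 1], [0, 1]) adds 3 and 2 bit by bit, and nothing stops you from passing bit lists of any length.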
Then how do you move all that chuff to higher-level stuff?
Think about the decisions inherent to the program: do you need to track the state of the wires? If so, how? If not, do you ever have to specify the wires themselves? If you only need to see the output, then why bother specifying a wire at all?
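To make the wire-state question concrete, here is one possible stateful answer (purely my own sketch, in contrast to the wire-free pure-function style): a Wire object that remembers its signal and notifies attached gates whenever it changes.

```python
# A sketch of explicit, stateful wires: each Wire holds a signal and
# a list of callbacks to run whenever that signal changes.

class Wire:
    def __init__(self):
        self.signal = 0
        self.actions = []

    def set(self, value):
        if value != self.signal:
            self.signal = value
            for action in self.actions:
                action()

    def connect(self, action):
        self.actions.append(action)
        action()  # run once so the gate reflects the current inputs

# An And Gate built on wires: it returns nothing at all; it just
# keeps its output wire consistent with its two input wires.
def and_gate(a, b, out):
    def react():
        out.set(1 if a.signal == b.signal == 1 else 0)
    a.connect(react)
    b.connect(react)
```

If all you ever look at is the final output, the wire-free style is simpler; explicit wires only earn their keep when you care about signals changing over time.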
Moving along, we see there are multiple ways to represent such items:
Full Adder -> Full Adder -> Inverter
Half Adder -> Full Adder -> Inverter
Inverter -> Half Adder -> Full Adder
And alas, we see a pattern here. Can the system, if represented some other way, be abstracted? What about:
Multiply -> Append -> Sum All
Divide -> Map -> Multiply All
Divide by PI -> Append -> Print All
And there is yet another common pattern that can be abstracted into a generalized function. This goes beyond the idea of "if you find yourself copy-pasting, it's time to refactor." It says that even if your program follows DRY to a T, if you are using the same patterns over and over again, it's time to refactor.
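That generalized function can be sketched as an ordinary pipeline combinator in Python (the stage names below are made up to echo the chains above, not from any library):

```python
import math
from functools import reduce

# Capture the repeated "stage -> stage -> stage" shape exactly once:
# a pipeline feeds each stage's output into the next stage.
def pipeline(*stages):
    def run(value):
        return reduce(lambda acc, stage: stage(acc), stages, value)
    return run

# Made-up stages echoing the "Divide -> ... -> Sum All" style chains.
halve_each = lambda xs: [x / 2 for x in xs]
scale_by_pi = lambda xs: [x / math.pi for x in xs]
total = sum

sum_of_halves = pipeline(halve_each, total)
sum_over_pi = pipeline(scale_by_pi, total)
```

Each concrete chain now costs one line, and the pattern itself lives in one place instead of being re-spelled every time.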
How would anyone be able to see these patterns unless there was a very simplistic, non-abstract, low-level model to begin with? I know the above takes more pondering than running (--install) or slopping an answer into map(), but this is knowledge I think you are taking for granted that is wholly unfamiliar to people for whom "abstraction" is a foreign term.
With all that said, I'm not saying abstractions are inherently bad, as I most certainly don't believe that, but I think there is something to be said for at least being able to visualize the abstraction layers you are building on.