Quote:
Originally Posted by well named
This isn't intended to be a comment about you, but just about the question of whether a front end dev should be familiar with computational complexity in general, or about the relationship between dev value and the kind of typical CS stuff that comes up in interviews:
I think I might have had a similar-ish career path to you. I don't have a CS degree, I'm now a little older senior dev type, I often think my practical experience is more valuable than certain CS knowledge, but maybe sometimes I'm wrong about that, etc.
That said, in my career I've done plenty of front-end work (not all web, or JS; and also plenty of back-end work) where the performance of algorithms dealing with complex collection-like data structures mattered, and often more than originally anticipated as a product/company scaled. I do expect a senior dev to be aware of the performance implications of algorithms, and to be able to implement logic efficiently, being thoughtful about when it matters and when it doesn't. It's not always important, but it's important often enough to be important :P
What I don't necessarily care as much about is someone's mastery of the related jargon, or even being able to quickly articulate why some implementation is O(n log(n)) instead of O(n^2). Although it's clear that having the CS background makes it easier to communicate with other devs who share the background, and that has some value. And I expect having a deeper understanding is also useful. It's just that it's also possible to figure out a lot by trial and error over a longer career. I would probably say something similar about data structures. There are some super cute algorithm tricks where I'd imagine being aware enough to know when to google (vs. just writing your naïve version) is as valuable as being deeply versed in the arcana, but a good practical knowledge of many data structures and their tradeoffs -- even when you mostly just rely on standard libraries -- is helpful.
When you talk about "knowing O(n) inside and out" I'm not sure how much you are referring to the ability to speak the jargon and solve hypothetical math problems, or how much you are skeptical of the value of the practical skills which are related. I think the former is more defensible than the latter.
Yeah I think I'm thinking more about the jargon.
My particular situation is I've been a node dev for the last 5-6 years - in an environment where 99% of the time node is tapping its fingers waiting for back-end services to return. So for me, code clarity, component scalability and scope flexibility have been pretty much all of my focus during that time.
Code clarity because we're working with offshore devs who might not be experts in closures, middleware, or async programming in general. My biggest concern is them accidentally setting something in global scope - which is easy to do in node. Now you might say "Well that's a BS situation, get better devs." But that's the world I lived in, so I made it work. And I think there's something to be said for the exercise of developing a framework that keeps devs out of trouble while still allowing them 100% freedom to build the feature they're trying to create.
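For what it's worth, the accidental-global trap I mean can be sketched in a few lines (the function name here is made up):

```javascript
// In non-strict CommonJS code, assigning to an undeclared name silently
// creates a property on the global object instead of a local variable.
function handleRequestLeaky(userId) {
  // Dropping the `let`/`const`, i.e. writing `currentUser = userId;`,
  // is equivalent to this in sloppy mode:
  globalThis.currentUser = userId;
  return globalThis.currentUser;
}

handleRequestLeaky('alice');
// Now every concurrent request (and every other module) sees the same
// mutable value - a classic source of cross-request data bleed:
console.log(globalThis.currentUser); // 'alice'
```

A `'use strict'` at the top of the file (or using ES modules) turns the undeclared assignment into a ReferenceError, which is one cheap guardrail.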
Component scalability meant that creating components 90-100 caused no more pain to the application than components 10-20. By the end we had some 200 components - which could be a full web page, a server-side rendered HTML snippet, or an AJAX REST call. To do this I purposely allowed some redundancy, in that a lot of the components shared back-end calls and back-end logic. If we saw the same gnarly business logic appear more than once we'd just create a utility method. This goes against the standard paradigm of one layer of API services and another for front-end components (in a many-to-many relationship). But it worked perfectly for us.
I feel like I spent the first 10 years of my programming career trying to build the perfect abstraction, and since then learning when to back off for the sake of code clarity and future flexibility. Nothing is worse to refactor than a big multi-layered abstraction that you suddenly realize doesn't handle a new use case. I've been on projects like that. It's much easier imo to develop with redundancy and factor out obvious code reuse situations later.
Scope flexibility just means that even with 100+ components, it's still very simple to implement cross-cutting concerns. I did this with everything being driven from a default properties object, and global middleware hooks at every step of the request/response chain, as well as every step of the back-end API calls chain (which are many-to-one with requests). When a new global or semi-global behavior was needed I would just add a new default property, then find the appropriate place(s) in the middleware chain to implement it. It was very simple for me to do, and very simple for other developers to pick up on.
Or since reporting always comes last, we can just snap on a piece of reporting middleware with hooks into the individual reporting components - all of which are superfluous to the actual feature code. Middleware lends itself really nicely to this, hierarchies often don't.
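A heavily simplified sketch of that shape - not the real framework, all names made up - looks roughly like:

```javascript
// One shared defaults object, one global middleware chain, components on top.
const defaults = { logRequests: false, reporting: false };
const middlewareChain = [];

function use(fn) { middlewareChain.push(fn); }

function runComponent(component, req) {
  // Every component starts from the same defaults, overridden per component.
  const props = { ...defaults, ...component.props };
  const ctx = { req, props, events: [] };
  for (const mw of middlewareChain) mw(ctx); // cross-cutting hooks run here
  return component.handler(ctx);
}

// A new global behavior is one default property plus one middleware -
// e.g. snapping reporting on after the fact, without touching feature code:
defaults.reporting = true;
use(ctx => {
  if (ctx.props.reporting) ctx.events.push('report:' + ctx.req.url);
});

const homePage = { props: {}, handler: ctx => ctx.events };
console.log(runComponent(homePage, { url: '/home' })); // ['report:/home']
```

Any single component can still opt out by overriding the default in its own props, which is what keeps the middleware global without being tyrannical.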
All of this stuff was planned ahead of time when I designed the framework, knowing our situation and the chaotic nature by which our requirements tended to mushroom and evolve.
So anyway it's not like I haven't been thinking deeply about my job the last 5 years. Maybe there's some reason I was highly valued by my company other than luck. I just haven't for the most part been in an environment where highly performant algorithms were needed.
The closest thing I can think of is when we debated whether we should read a hash of 50k zip codes into node's resident memory. We did a bunch of benchmarks and it didn't increase node's internal memory significantly or slow anything down.
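The benchmark itself was basically just building the map and eyeballing the heap delta, something like this (data and numbers here are synthetic):

```javascript
// Build a ~50k-entry zip-code map and check heap growth before deciding
// whether to keep it resident in node's memory.
function loadZipCodes(n) {
  const zips = {};
  for (let i = 0; i < n; i++) {
    zips[String(10000 + i)] = { city: 'City' + i, state: 'ST' };
  }
  return zips;
}

const before = process.memoryUsage().heapUsed;
const zips = loadZipCodes(50000);
const after = process.memoryUsage().heapUsed;

console.log(Object.keys(zips).length + ' entries');
console.log(((after - before) / 1e6).toFixed(1) + ' MB heap growth');
```

In our case the growth was small enough that an in-memory lookup beat making a service call per request.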
Oh yeah, also I wrote custom middleware timers that showed how long each piece of middleware took on average. We used this to debug code several times - when external data sources were taking too long to return, or when middleware that wasn't going to an external data source was taking more than a few milliseconds. The one I remember is we were initializing moment-timezone inside a loop that executed some 600 times. It took 2 seconds. We moved the initialization outside the loop and it went down to a few millis.
I guess most of my perf debugging was on the macro level like that.
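The timer idea is easy to sketch (simplified stand-in, not the real code; `expensiveSetup` here is a placeholder for something like a moment-timezone init):

```javascript
// Per-function timer like the middleware timers described above:
// wrap each middleware and accumulate call counts and total time.
const stats = {};

function timed(name, fn) {
  return (...args) => {
    const start = process.hrtime.bigint();
    const result = fn(...args);
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    const s = stats[name] || (stats[name] = { calls: 0, totalMs: 0 });
    s.calls += 1;
    s.totalMs += ms;
    return result;
  };
}

// Stand-in for an expensive one-time setup.
function expensiveSetup() {
  let x = 0;
  for (let i = 0; i < 1e5; i++) x += i;
  return { format: row => row + ':' + x };
}

// Bug version: the setup re-runs on every loop iteration.
const slow = timed('slow', rows => rows.map(r => expensiveSetup().format(r)));

// Fix: hoist the setup out of the loop, as in the moment-timezone case.
const fast = timed('fast', rows => {
  const fmt = expensiveSetup();
  return rows.map(r => fmt.format(r));
});

const rows = Array.from({ length: 600 }, (_, i) => 'row' + i);
slow(rows);
fast(rows);
console.log(stats); // average per middleware = totalMs / calls
```

Having the averages per named middleware is what made the "2 seconds somewhere in this chain" hunts quick - the outlier jumps straight out of the table.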
Last edited by suzzer99; 07-28-2018 at 03:27 PM.