** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD **

02-28-2014 , 01:30 AM
Quote:
Originally Posted by gaming_mouse
no one is forgetting that and it's not an argument.

"let's not forget that C is just an abstraction over assembly commands, so you don't gain anything in computing space...." etc.

readability, at heart, is about mapping code onto natural mental concepts. all the other stuff (simplicity, brevity, etc) falls out of that, and it's possible to have simple or concise code that is nonetheless difficult to read. what something like "map" gives you is the ability to capture the concept of "hey, i want to do *this* to all of these things, i don't care how." you are of course free to argue this point, but that is a lot closer to how i think naturally than "first i want to take the first thing, and do this to it, and then i want to take the second thing, and do this to it, and so on."
Heh, I meant to reply to this.

I agree in part and disagree in other parts, but that may be because I use a lot of SQL, so I'm keenly aware of the trade-offs between speed and readability.

For me, this

Code:
select pid, pname
from tableA
where pid not in
      (select pid
      from tableB);
is much more readable than

Code:
select pid, pname
from tableA ta
where not exists
      (select pid
      from tableB tb
      where ta.pid = tb.pid);
and in fact, any intro databases class will teach you the first one. But the second version is way faster than the first, and I'm not talking about a small difference: where the first one may take 5 minutes, the second can take under 5 seconds on the same data set.
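For completeness, a third way to write this kind of anti-join is a left join with an IS NULL filter; as far as I know it tends to behave like the NOT EXISTS version on most planners, though I'd verify that on your own data (same hypothetical tableA/tableB columns as above):

Code:
select ta.pid, ta.pname
from tableA ta
left join tableB tb
      on ta.pid = tb.pid
where tb.pid is null;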

How about this one?

Code:
select first_name, last_name
from my_table
order by last_name desc
limit 1;
It certainly isn't the way I think, and omg(!) look at that "order by" clause. Can't I do better? As far as I know, not really. I could do many other queries that reflect how I think, but the vast majority of them would perform worse.
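(For instance, a max() subquery reads closer to "the name that nothing sorts after," but as far as I can tell it doesn't buy anything performance-wise, and unlike limit 1 it returns ties.)

Code:
select first_name, last_name
from my_table
where last_name = (select max(last_name) from my_table);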

This is, in my opinion, the danger of leaning on a bunch of functional programming ideas. Functional code is opaque enough that you may not really "see" what is happening underneath, and that can lead to code that doesn't really reflect your actual thinking.

Take the Clojure code I posted. Since you don't know the language, you wouldn't know that each (vec n) is a function call that serves as a loose abstraction over (into [] n). You can't mutate in FP, so those (vec)s are not coercions. But here is something you wouldn't know at all: while (vec) and (into []) resolve to the same thing, (vec) isn't as fast as (into []). (vec) just reads better and maps more naturally onto how I think, but in reality it is the worse option, and I probably shouldn't be using it in an n*n matrix computation.

The #() is another function. So, to build the matrix, I used a ton of functions, 3 that you may not have seen, and one for loop. When I look at that code, I expand all of that out in my brain, so I "see" all of the functions and 2 loops.

Quote:
Originally Posted by Shoe Lace
Dave, nice one. That is readable too. Although when I looked at it all I could think of was professor Sussman closing all of the parens with his chalk from the SICP lecture videos.
Honestly, I hate it. What's up with the (for x (range n) ...) and then that is the last we see of x?

I'm on a programming break, so I have free license to write bad code.
02-28-2014 , 01:48 AM
dave, i don't think that long SQL example was needed to make the point that sometimes speed matters, and when it does you might have to sacrifice clarity and maintainability. i'd never argue against that either. but your default position should be to prioritize readability over speed until your code proves too slow. premature optimization is the root of all evil, etc.
02-28-2014 , 02:08 AM
I haven't taken the MIT class, but just took a peek at some of the lectures and it looks pretty intense, with a lot of mathematical proofs and analysis.

I would say the Roughgarden MOOC is a lite version of MIT's course. He said that in his real Stanford class there are a lot of proofs and little coding. I don't think Sedgewick's course is directly comparable, since it's much less analytical, but has a bunch of useful practical advice and covers more ground. If I was starting out I would take Roughgarden first, then Sedgewick.
02-28-2014 , 02:15 AM
Quote:
Originally Posted by gaming_mouse
dave, i don't think that long SQL example was needed to make the point that sometimes speed matters, and when it does you might have to sacrifice clarity and maintainability. i'd never argue against that either. but your default position should be to prioritize readability over speed until your code proves too slow. premature optimization is the root of all evil, etc.
I agree with you somewhat, but I'm on the fence about it in many instances. I think the abstractions offered by many FP ideas, especially when baked into a normally non-FP language, often dissolve into "look ma! no loops" swagger and ultimately don't serve context, maintainability, or readability, to be honest. Just because you can read the nested maps better than nested loops doesn't mean you aren't doing n*n operations, and I often suspect that people don't realize this when they show off functional style in language X.

As for SQL: it's a funny place where the readable answer is never the correct answer. It certainly illustrates the point, I think.
02-28-2014 , 02:26 AM
I've been doing a bit of light reading this week. The book is "Gamification by Design." About 50% of the way through, and it is virtually a play-by-play of every strategy used by Stack Exchange.

Anyways, the book discusses some app called Foursquare. I'd never heard of it, but apparently you win random badges when you visit various local dives. I guess that, when you visit one place, you get this badge of (dis)honor:

Spoiler: [badge image]
I'll admit it: I look much better than that guy.
02-28-2014 , 02:32 AM
Quote:
Originally Posted by daveT

As for SQL: it's a funny place where the readable answer is never the correct answer. It certainly illustrates the point, I think.
the point you should take from that is that it's evidence of a poor language implementation. you could write your own SQL interpreter which took the SQL that you find more readable and converted it to the SQL that executes fast. Or the vendors could just fix it, assuming there aren't other reasons for keeping the slow version.

Your conclusion, when confronted with a conflict between speed and readability, seems to be: "Readability isn't everything. I care about speed."

My conclusion is: "The language is broken. Fix it so you have both."
02-28-2014 , 03:01 AM
I don't know enough about the innards of SQL or why the standard is that way. I find most of it pretty logical in its own way.

Obviously I'm not talented enough to build my own SQL. Sure, I could write PL/pgSQL and views, if that is what you meant. Otherwise, I prefer using DSLs.
02-28-2014 , 03:12 AM
I didn't mean it literally. It's just a different attitude that might influence your priorities differently is all.
02-28-2014 , 03:19 AM
Fair enough. I just didn't understand the comment about the nested loops. I'll trust that you're coming from the right place.
02-28-2014 , 03:45 AM
Quote:
Originally Posted by tercet
Life Dilemma..

I am a Jr Web Dev (C#, HTML, CSS, JS) with 18 months on the job (36k). I started out pretty fresh but I have improved a lot over those 18 months. I want to possibly try something more challenging in the same field, but it doesn't look as if any opportunities for a promotion/raise will arise in my company anytime soon.

So I'm thinking of two plans to get a new job..
A) Grind out current job, work on a new portfolio, poker p/t on the side while looking for a new job
B) Leave my current job, work on a new portfolio, and play poker 40hrs/week (grind out ~5-8k a month) until I find a new job

Would a few months off look bad on a resume if I were to do plan B?
Option B sounds pretty dangerous. You're obviously not me, but when I played poker full time I was basically "obsessed" with the game, i.e. my brain was thinking about hands even when I wasn't playing. It was really hard to do anything else on the side.

Also: not sure how good you are at poker, obviously, but 5-8k/month sounds like a lot in the current game environment if you're diving in from playing part time or not at all.

If it's financially feasible I'd rather go with
C) Quit poker, grind out the job, and invest the time you would have spent playing poker in "career building". Identify areas where you think you should get better (often algorithms, for self-learners), pick up an interesting new language, and finish a couple of projects and put them on GitHub.
Since you do C# and JS, I'm assuming JS is mostly filler. You could invest in getting great at JS, i.e. use it as the sole language for a project. Getting better at databases is also something you could consider.
You could pick up one of the trending languages + web framework (Ruby+Rails, Python+Django), but I think the JS route is more fruitful. Alternatively you could pick up a language that will generate some interest in a job interview just because you know it (my guess these days is Go or Erlang, possibly a Lisp or something functional, depending on where you interview).

----
dave: pretty shocked you haven't heard of Foursquare; it is/was one of the startup darlings. Quite a bit of buzz. I mostly make fun of my friends who use it for providing free data about their whereabouts.

Last edited by clowntable; 02-28-2014 at 03:52 AM.
02-28-2014 , 09:00 AM
Dave, in your SQL examples isn't it up to the database platform to turn your readable but less efficient code into the more efficient path?

I remember some article on HN the other week about this where a guy compared a query on postgres, mysql, mssql and oracle. Oracle generated the best query plan given the current query which resulted in it completely destroying postgres and others in performance.

I think the moral of the story was that you could get postgres to execute it just as fast, but it required a more complicated query to get the same plan.

I'm not really sure if functional vs non-functional language styles even play a performance role in web development. Maybe in the 0.00000000001% case?

For example, if you decide to use map vs a for loop in a language that supports both, then maybe the map version will only do 20 million iterations per second while the for loop can do 50 million, but does that matter when, as soon as you introduce IO, it drops to 300 iterations per second for both?

Or if you're only drawing 50 elements on the screen, does it really matter that one of them completes 2 nanoseconds faster than the other?
02-28-2014 , 09:35 AM
Quote:
Originally Posted by gaming_mouse
the point you should take from that is that it's evidence of a poor language implementation. you could write your own SQL interpreter which took the SQL that you find more readable and converted it to the SQL that executes fast. Or the vendors could just fix it, assuming there aren't other reasons for keeping the slow version.

Your conclusion, when confronted with a conflict between speed and readability, seems to be: "Readability isn't everything. I care about speed."

My conclusion is: "The language is broken. Fix it so you have both."
I agree with your conclusion completely.

This and some of the earlier posts about processor architectures have me doing a fair amount of thinking about future architectures that will facilitate higher-level, more readable code and run it even faster (with less power drain while we're at it).
02-28-2014 , 10:15 PM
Quote:
Originally Posted by clowntable
dave: pretty shocked you haven't heard of Foursquare; it is/was one of the startup darlings. Quite a bit of buzz. I mostly make fun of my friends who use it for providing free data about their whereabouts.
Honestly, I don't find the stuff interesting enough to care. I've heard of the super hits like Facebook, Twitter (I have accounts on neither), and Angry Birds, obviously. I've heard something about that Tinder app, Snapchat, that other one FB just bought for 16b (name slips my mind?), and Flappy Bird, but I've never bothered to use them. I don't find the concepts appealing or "disruptive", tbh.

Wanna guess how many social and games apps I have on my phone?
02-28-2014 , 11:05 PM
Shoe, I hope someone with more knowledge answers your questions here, but this is my answer:

Quote:
Originally Posted by Shoe Lace
Dave, in your SQL examples isn't it up to the database platform to turn your readable but less efficient code into the more efficient path?
Yes, I think it should, but it is hard to fault the implementation in all cases. Some (most) database schemas are far from normalized, so it wouldn't be possible to optimize for all queries on all data sets.

I think that PostgreSQL does a pretty decent job of optimizing queries overall, though. I have quite a few not-so-good queries that run fine. A classical example is using cartesian products where using inner joins would clearly be superior.

As far as I've been able to measure, these two queries run in about the same time:

Code:
select fn.uid, fn.first_name, ln.last_name
from fnames fn, lnames ln
where fn.uid = ln.uid;
Code:
select fn.uid, fn.first_name, ln.last_name
from fnames fn
inner join lnames ln
on fn.uid = ln.uid;
The first one is written (visually at least) as a Cartesian product, which is basically like running two nested for loops, but the query planner optimizes it into an inner join, which is more like running one loop.
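You can check that yourself by comparing the plans; in PostgreSQL, something like this should show the same join plan for both spellings:

Code:
explain analyze
select fn.uid, fn.first_name, ln.last_name
from fnames fn, lnames ln
where fn.uid = ln.uid;

explain analyze
select fn.uid, fn.first_name, ln.last_name
from fnames fn
inner join lnames ln
on fn.uid = ln.uid;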

Oddly, many beginners are told to use this one:

Code:
select distinct fn.uid, fn.first_name, ln.last_name
from fnames fn, lnames ln
where fn.uid = ln.uid;
It does produce the output just fine and gets incredible speed gains compared to not using the "distinct" clause, but oh my....

Quote:
I remember some article on HN the other week about this where a guy compared a query on postgres, mysql, mssql and oracle. Oracle generated the best query plan given the current query which resulted in it completely destroying postgres and others in performance.
This wouldn't surprise me one iota, for many reasons. The most obvious is that Oracle is built by a large corporation and has been around longer than PostgreSQL. PostgreSQL is, of course, well-built, but it is open source and built on passion; it just can't compete on compiler and optimizer engineering for a project like this.

The next reason is that PostgreSQL works in the opposite fashion from Oracle. With PostgreSQL, you have to consider the proper hardware and software foundation, and thus it takes some dirty work to reach its full potential if you are aiming at micro-benchmarks. Oracle compiles down to a virtual machine which, of course, is optimized to work with Oracle. Without information on how the machines are set up, these benchmarks don't mean much. This goes for MySQL and any other database that doesn't compile to a VM as well. It doesn't exactly contradict the query-optimization point, but it is something to consider when looking at benchmarks.

The final caveat is that Oracle also demands its own optimizations, and they are often at odds with what you would do on other databases, so it is easy to cherry-pick queries to make a point (not making an accusation, but it sounds a bit disingenuous to me to extend one case to all cases). I could just as well pick a query that blows up an Oracle database to "prove" PostgreSQL is better at optimizing queries. A good book that explores various optimizations on different databases is Refactoring SQL Applications. The first thing the author writes is "don't take the benchmarks as proof that one database is better than another." The takeaway is that there are too many variables to consider.

Quote:
I think the moral of the story was you could get postgres to execute it as just as fast but it required a more complicated query to get the same plan.
I guess I sort of answered this one above.

Quote:
I'm not really sure if functional vs non-functional language styles even play a performance role in web development. Maybe in the 0.00000000001% case?

For example, if you decide to use map vs a for loop in a language that supports both, then maybe the map version will only do 20 million iterations per second while the for loop can do 50 million, but does that matter when, as soon as you introduce IO, it drops to 300 iterations per second for both?

Or if you're only drawing 50 elements on the screen, does it really matter that one of them completes 2 nanoseconds faster than the other?
I fear that too many developers take this attitude and push it to some odd point of no return. I have 30+ mbps internet, and many sites take forever to load and often cause my fan to start spinning. I run various programs to block resource downloads and even turn off site-defined fonts to speed up my browsing. I can't imagine the hell people on 5 mbps are living with these days.

So, imagine how many times they said "meh" before they ended up with a crap site that is barely usable. Imagine the time it would take them to reverse-engineer a site into the top 80% for speed once it is sitting at sub-50%.

As for writing an algorithm for some NxN matrix, I don't think you should eschew using faster algorithms. A major point, in my interpretation, is to assume that N can be extremely large.
03-01-2014 , 12:35 AM
I hate when new people move into my apt building and take the same channel as my router and I start getting negative pings. GTFO my channel br0s
03-01-2014 , 09:31 AM
Dave, I wouldn't know what others do; I only take it to the point I described. I still pay huge attention to latency and the perceived load time of sites I create, but now, instead of prematurely optimizing things that don't make sense to optimize, I just look at the final results and take care of the low-hanging fruit.

I mean, I'm using rails, which is supposedly the slowest thing in the world, but the sites I create usually score low 90s in those google/yahoo speed tests and routinely get 40-60ms response times in chrome's network tab while serving dynamic content from an ec2 micro instance that I ping 20ms to.
03-01-2014 , 03:12 PM
So, another db modeling question related to a for-fun project.


There's a TEAM, which has PROJECTs, which have things like CONTACTs.

The guy I'm working with likes to cram team_id into pretty much every sub-node, so e.g. a CONTACT will have a project_id AND a team_id. I'm wondering if this is a positive, a negative, or a big who-cares? Obviously you can query the project it belongs to, then query which team that project belongs to, so it feels like we're giving it a weird two-parent setup, but I really don't know the pros or cons.
03-01-2014 , 03:24 PM
so basically, he wants to **** up your database structure because he's not good at writing join queries?

also, it seems like teams, projects, and contacts can vary independently. a contact, in particular, could belong to multiple projects. so with projects and contacts, at least, there should be a many-to-many relationship and a junction (bridge) table to implement it (rough sketch below). that *may* not actually be the case for you guys, but that seems like the most natural model to me.
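in relational terms the junction table would be something like this (hypothetical table/column names, just a sketch):

Code:
create table project_contacts (
    project_id integer not null references projects (id),
    contact_id integer not null references contacts (id),
    primary key (project_id, contact_id)
);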
03-01-2014 , 03:47 PM
I think the issue he/I'm having, since we're NoSQL newbs, is that if you are a user, you should only be able to see contacts that belong to a project which belongs to your team, i.e. you can't see other teams' projects. So you find what team the current user belongs to, then you need to find what team the contact belongs to, so I assume he figures it's easier to just have that team_id sitting there in the contact object. I admit, when we get like 5 children deep, it's getting a bit confusing/annoying finding the team.
03-01-2014 , 04:13 PM
you didn't mention it was nosql before... my response was assuming a traditional relational db. i've not used nosql much so i can't speak to that.

separately from that, your explanation above is confusing to me. users belong to teams but contacts also belong to teams? the "team" concept implies to me a group within a business organization. "contact" implies people outside the businesses, ie, clients or potential sales leads, etc. you should probably explain what the business is generally and what the teams do and who these contacts are. it's hard to give advice otherwise.
03-01-2014 , 04:23 PM
Yeah I don't know either. I'm seeing this in rails lingo so far:

Project has_many contacts
Contact belongs_to project

Project has_many teams
Team belongs_to project

But if he wants to answer questions like "show me all contacts who belong to xyz team" then I think a `through` relationship should exist so he can reach through the contact's project to pull out the team.
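Under the hood that `through` is basically just a join; in plain SQL it would be something along these lines (table/column names are guesses based on the models above):

Code:
select c.*
from contacts c
inner join projects p on p.id = c.project_id
inner join teams t on t.project_id = p.id
where t.id = 123;  -- the "xyz team"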
03-01-2014 , 04:41 PM
Quote:
Originally Posted by gaming_mouse
you didn't mention it was nosql before... my response was assuming a traditional relational db. i've not used nosql much so i can't speak to that.

separately from that, your explanation above is confusing to me. users belong to teams but contacts also belong to teams? the "team" concept implies to me a group within a business organization. "contact" implies people outside the businesses, ie, clients or potential sales leads, etc. you should probably explain what the business is generally and what the teams do and who these contacts are. it's hard to give advice otherwise.
I really do need to work on being more clear. (And naming conventions!)

A team is the main object. I'd prefer it be called Company, but that's neither here nor there. Actually **** it, I'm going to change it to Company right now.

It's contact management for a very specific media company. The company is the main object, and it has users who work for the company. Users create projects; when they do, the project belongs to the Company that user belongs to. A project is essentially work to be done with another media company, so the project name would, generally speaking, be the name of the company you are working with. The contacts are people inside of that "project".

Contacts belong to a project, 1 project to many contacts.
1 Company/Team owns many projects.
1 Company/Team has many users.
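In relational terms, that would look roughly like this (hypothetical column names, just to sketch it):

Code:
create table companies (id integer primary key, name text);
create table users     (id integer primary key, company_id integer references companies (id), name text);
create table projects  (id integer primary key, company_id integer references companies (id), name text);
create table contacts  (id integer primary key, project_id integer references projects (id), name text);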

But yeah, sorry for not clarifying about NoSQL. I don't sleep much and thought I'd mentioned this recently, but it was actually January, lol. All of my experience is with MySQL and Oracle, so this would be much easier there, but I'm trying to expand my horizons, so for some reason I picked a language I'd never used and a DB I'd never used.

Quote:
Originally Posted by Shoe Lace
Yeah I don't know either. I'm seeing this in rails lingo so far:

Project has_many contacts
Contact belongs_to project

Project has_many teams
Team belongs_to project

But if he wants to answer questions like "show me all contacts who belong to xyz team" then I think a `through` relationship should exist so he can reach through the contact's project to pull out the team.
This through thing sounds interesting; that's pretty much exactly what I'm looking for. I've never heard of that term though, can you elaborate? Right now I'm using parent references and traversing the parents till I get to the Company, but this is gross.
03-01-2014 , 05:10 PM
Materialized Paths look pretty cool. Think I'm gonna mess with that next. This stuff is crazy!
03-01-2014 , 06:45 PM
Email I just sent to all leads and management in our department. Should be interesting to see if a big company can really change its stripes. I tend to doubt it.

Quote:
Subject: [current project*, which I'm redacting because it has company initials] and continuous integration

*(current project is responsive/Node/Angular, moving the back end from WebLogic to a Scala/Play API wrapped around old ATG functionality - it's like 4 major new technologies at once on a monster site - we're converting one portion of it over first)

Hi all, I just want to make sure we're all on the same page with regard to our testing framework, which we have to have as part of any kind of CI/CD progress. I noticed this was one of the goals for the year, and I don't want there to be any misconceptions about where we stand or the challenges ahead to get there - at least from my point of view.

(Cliff notes: we have made some progress, but in my opinion if we just proceed as usual w/o dedicating more time, resources and maybe some expert guidance, we will never get to real automated integration testing or unit testing)

We have tried 4 or 5 different testing framework/test runner/test browser combinations so far. Right now for unit testing we are settled on karma/jasmine/phantom. For integration I am still torn between grunt-casper/casper/phantom and karma/jasmine/phantom. It would be nice to be on one framework - but karma is ridiculously slow when going against live APIs. The unit tests run against stub data so speed isn’t a concern. There might be a way to speed up karma but I need time to research, or someone else with time to research.

The idea for integration tests is to make sure the page loads or /json calls work, then do some kind of minimal sanity/happy-path testing (like, does the carousel load if the user clicks View All). The major challenges to integration testing are that our data is always in flux, our test users tend to get corrupted very quickly, tests break constantly due to UI changes, and the testing framework is very hard to debug when it acts weird. So we will probably need to keep our integration tests as basic as possible. The optimal level of detail is something we will have to determine over time. We are using grunt-casper, which allows us to run up to 10 tests in parallel. This is great for speed, but grunt-casper has proven to be kind of flaky. If I ever get some time, or we have an expert resource to dedicate to this, I'd like to just fork it, fix the flaky stuff, and get it to do exactly what we need.

For this reason, more detailed testing will be done with unit tests that go against stub data. These will attempt to cover every piece of functionality that is coded in. The major challenge here is going to be changing our culture to include writing test cases and hopefully get to real test-driven development. In my opinion, this isn’t going to happen w/o a commitment from all of [department] at every level, and a willingness to accept delays as every developer learns a new framework, new approach, and how to write proper test cases. If we just proceed as usual, automated unit testing will immediately get pushed aside for the first major delivery deadline. If we somehow pull it off, this will be invaluable going forward for maintaining website quality, living documentation of all the existing functionality and ease of future enhancements. It would be like night and day compared to where we are now.

I realize that dev leads and app architects bear a huge chunk of this responsibility, and I think we're all willing to accept that. But we need to bake this into the project plan, make it a mandatory requirement for delivery, and we need support when we get resistance due to perceived slowness at first (which should be made up later in the project, and especially in maintaining the code over time). We can't just talk about test-driven development and expect it to magically happen.

I wish I had a month to dedicate to researching these issues right now. But I have just inherited 25 or so [outside consultant who just rolled off] bugs (with more to come, I'm sure), and with XXX absent I will probably have to take on more angular/front-end JS bugs. This is probably a good thing in the long run, because I need more familiarity with our front-end framework - I have been mostly buried in node the last few months, and I will need to help convert what we have in [project] into a shared framework for the other coming apps like self-care. Soon I will also need to focus on converting our node framework into a library for Self-Care to share with [project], documenting the node framework and the shared front-end framework, and getting the self-care devs up and running. And of course endless craziness until we finally get [project] out the door. I have been told [project] takes priority over anything else, and I'm already spending 50-60 hours a week on that. I'm not complaining, just pointing out that I really don't have any bandwidth for major side projects right now.

I thought YYY (the guy who failed his drug test) was going to be the magic bullet for this. He had tons of experience with integration testing and seemed really excited about setting up the framework. If there is any chance we could look for someone else like him, it would do worlds of good toward actually achieving our goal of real automated testing. I'm not sure why we gave up looking for another crack JS dev. With [other project] pulling off resources we are still really strapped for front-end devs. I guess ZZZ might come in through [offshore partner], which could really help if I'm allowed to put him solely on testing for a while. But if that doesn't happen, there doesn't seem to be much else in the pipeline.

Either that or maybe bring in another [rockstar consultant]-type to just get us off the ground. I feel like I can figure all this stuff out myself given enough time. But if we could get a head start like we got from [rockstar consultant] with node, it would make a world of difference. We have learned the hard way that outside consultants don’t work very well for feature work. But at least with [rockstar consultant], that worked out great to give us a jump start on setting up a framework with brand new technology and keep us from making the usual rookie mistakes.

Just my 2 cents, thanks for listening
03-01-2014 , 08:30 PM
Quote:
Originally Posted by Shoe Lace
Dave, I wouldn't know what others do; I only take it to the point I described. I still pay huge attention to latency and the perceived load time of sites I create, but now, instead of prematurely optimizing things that don't make sense to optimize, I just look at the final results and take care of the low-hanging fruit.

I mean, I'm using rails, which is supposedly the slowest thing in the world, but the sites I create usually score low 90s in those google/yahoo speed tests and routinely get 40-60ms response times in chrome's network tab while serving dynamic content from an ec2 micro instance that I ping 20ms to.
Once again, I don't put much stock in benchmarks. I guess I'll pull up the framework shootout that pops up on HN every few months: http://www.techempower.com/benchmarks/

Now, I can't talk much about all of these frameworks, but I can talk about the Clojure ones. There are 4 different Clojure "frameworks" discussed here: Jetty, http-kit, Compojure, and Luminus (if there are others, forgive me). The first issue at hand is that none of these are frameworks. Jetty and http-kit are nothing more than servers. Basically, all they do is let you compile down to a .jar file and serve your resources on :3000 or whatever you tell them. Jetty and http-kit can be used with any JVM-based system, including Java, JRuby, Jython, Scala, etc.

Next, Compojure is sort of a framework, but only in the nominal sense that it gives you the ability to create routes and a few extra tools to work with; the real framework there is Ring. Luminus is nothing close to a framework; it is simply a way to lay out your files. Finally, no sane person deploys Compojure (and by extension Luminus) raw: the Compojure docs explicitly say not to serve up raw .clj files and explain why. Thus you wind up back at utterly meaningless benchmarks, since you have to deploy on some server (Jetty, http-kit - which, once again, are not frameworks) to get something dependable.

I'm not sure if RoR has some intermediate step or if you deploy it directly. The big issue is that there are a million variables, and I wouldn't know how using RoR -vs- SomethingElse makes much of a difference. You have to consider the server, resource size, image sizes, javascript and css frameworks, and of course, database optimization and code -> db -> code latency. I suppose one small improvement would be gained if you built the same site with mostly Ruby code and compared the same site to nothing but gem installs, but that is something you would know more about.

The point is that I don't believe RoR is inherently "slow." I imagine you still favor minimal and clean code, so your work would be much easier to optimize and is likely fundamentally clean and optimized from the start. Granted, there are going to be some issues, but it's unlikely to be a spaghetti blob of junk and complex dependencies. I suspect, though, that if you were attempting to "speed up" RoR, you'd do well using some JRuby and compiling down to a Java servlet; the very last place I'd look is RoR itself.

Last edited by daveT; 03-01-2014 at 08:48 PM. Reason: .jar, not .zip,...

      