** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD **

05-17-2017 , 12:02 AM
Quote:
Originally Posted by Larry Legend
I spent a lot of time early on in this bootcamp getting to know the JS iteration methods; now whenever I have a challenge in front of me, I feel like I always want to reach for a forEach loop.

It's nice being somewhat productive with crude tools and strategies, but I really need to expand my horizons and learn a lot more features.

Is Codewars a good way to practice? Any other suggestions?
exercism.io is by far the best. They will let you know when they think you are using too many loops and recommend that you find an Enumerable method instead.

Best to just post examples in here and someone will probably show you a better way of doing it.

The Ruby docs are insanely good at showing you the available methods.

https://ruby-doc.org/core-2.4.1/Enumerable.html
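A toy sketch of the kind of refactor that's meant (the numbers and the task are made up for illustration): a hand-rolled each loop with a manual accumulator versus the equivalent Enumerable chain.

```ruby
# Task: collect the squares of the even numbers.
nums = [1, 2, 3, 4, 5, 6]

# Loop style: manual accumulator, mutation inside the block.
squares = []
nums.each { |n| squares << n * n if n.even? }

# Enumerable style: no accumulator, intent reads left to right.
squares2 = nums.select(&:even?).map { |n| n * n }

puts squares.inspect   # => [4, 16, 36]
puts squares2.inspect  # => [4, 16, 36]
```

The second form is what exercism reviewers tend to push you toward: each step (filter, then transform) is named by the method instead of buried in loop bookkeeping.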

Last edited by OmgGlutten!; 05-17-2017 at 12:08 AM.
05-17-2017 , 12:07 AM
Quote:
Originally Posted by Grue
Like I said, no one makes new applications with jQuery selecting elements and showing/hiding them anymore; they use other stuff. Not to say those apps aren't still around, but they're all legacy.
What is being used in place of jquery?

So far I have been learning React in the context of Rails, and it has mostly been heavy on just rendering objects (obviously, since it is not taught from the perspective of an entirely front-end SPA), so I have not seen much in the way of DOM manipulation.

jQuery is so simple and straightforward though.
05-17-2017 , 12:49 AM
Quote:
Originally Posted by RustyBrooks
I feel like I've asked this before but I can't remember, and I'm not happy with the solution I currently have.

I work on a backend API. We had to add throttling because people come along and endlessly fetch pages. As fast as they can, until they get everything they want.

So I added throttling. Initially, when someone got throttled it immediately returned them a 429. It's pretty fast to throttle people and they try again immediately so often we'd be returning like 50 req/s per throttled user. I finally decided to add a small delay, since most people are fetching data serially in a loop. The delay means that most people just have 2 throttled req/s.

But some dude came along tonight and I guess was fetching results massively parallel because even with the 500ms delay he was, on his own, causing 150 req/s. This also means that lots of connections were being held open, delayed by the throttle.

So what's the solution here? I'd like a way to keep them from hitting our backend servers I guess. In front of those is an AWS ELB, can I tell the ELB "this guy needs to go away for 60s" or something like that? Should I just allocate a ****load of threads so that all the ones that are holding connections open for throttled users don't get too choked to respond to normal requests? I wish there was a way for the ELB or some other upstream layer to just redirect the user somewhere else.
Isn't this a simple nginx configuration?

http://nginx.org/en/docs/http/ngx_ht...eq_module.html

Quote:
Sets the shared memory zone and the maximum burst size of requests. If the requests rate exceeds the rate configured for a zone, their processing is delayed such that requests are processed at a defined rate. Excessive requests are delayed until their number exceeds the maximum burst size in which case the request is terminated with an error 503 (Service Temporarily Unavailable).
This seems to be the exact behavior you need.
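A minimal sketch of what that configuration could look like (the zone name, rates, and backend address are all invented for illustration; see the linked module docs for the real knobs):

```nginx
http {
    # 10 MB shared zone keyed by client IP; sustained rate 10 req/s.
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    upstream backend {
        server 127.0.0.1:8000;  # hypothetical app server
    }

    server {
        location /api/ {
            # Up to 20 excess requests are queued and delayed to the
            # configured rate; anything beyond the burst is rejected
            # with an error instead of holding a connection open.
            limit_req zone=api_limit burst=20;
            proxy_pass http://backend;
        }
    }
}
```

The burst/delay behavior is the relevant part: it combines "slow them down" with "cut them off" in one directive.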
05-17-2017 , 09:51 AM
1. we aren't using nginx (well, we serve static content from it, but not the backend)
2. that's per ip, we do it per user when possible and IP when not
3. we have different service levels - users get more requests/hour than unauthed people

It sounds like they do the same thing as us though: delay the response to throttled requests.
05-17-2017 , 10:57 AM
Quote:
Originally Posted by RustyBrooks
1. we aren't using nginx (well, we serve static content from it, but not the backend)
Do you have any web server or reverse proxy at all in front of your application? Generally speaking, you don't want to expose an API backend directly to the internet. I don't think ELB does much more than load balancing, though I could be wrong.

Quote:
2. that's per ip, we do it per user when possible and IP when not
You can do it per user - just uniquely identify the user in a header and use that as the key. You can also set more than one rate limit.

Quote:
3. we have different service levels - users get more requests/hour than unauthed people
It's possible you agreed to some super complex service level agreements that are hard to implement at this level because you have to read a bunch of database tables to figure out a user's rate limit. But either way, you want to handle this as far away from the application as possible, and you don't want to reinvent the wheel: there are battle-tested solutions, and whatever you come up with is going to be difficult to test across the wide range of scenarios that may occur. If for some reason you have to do it directly in the API backend, just about every web framework has some package that does this for you.

Quote:
It sounds like they do the same thing as us though, delay the return of throttle requests.
Except for the part where they don't let connections queue up indefinitely (unlike your second approach) or drop connections as soon as they hit the rate limit (unlike your first approach). It's basically a configurable combination of the two approaches you tried.
05-17-2017 , 12:01 PM
Quote:
Originally Posted by Grue
Use mongoose it's slightly less slutty
We're using it today, 'tis nice.

Should you define a model and a schema in the same file, or should you separate the schema into another file?
05-17-2017 , 12:19 PM
The problem with mongoose is when you need to access mongo via some other avenue. I know one company that had to rip out all their mongo and replace it with postgres or something, because they had grown too big and needed tighter control over DB schemas and referential integrity.
05-17-2017 , 12:51 PM
Isn't that just mongo itself, not a problem with mongoose?

I put my schema and model in the same file and export mongoose.model() on it.
05-17-2017 , 12:54 PM
Well yeah, his point was he needed more DB-level data integrity controls. He could get that with mongoose but not everything was going through mongoose to access mongo.
05-17-2017 , 02:03 PM
Well I don't need it, was just checking on the convention. I assumed they were in the same place but wouldn't have been surprised if it was a convention to separate them.
05-17-2017 , 09:08 PM
Quote:
Originally Posted by RustyBrooks
I feel like I've asked this before but I can't remember, and I'm not happy with the solution I currently have.

I work on a backend API. We had to add throttling because people come along and endlessly fetch pages. As fast as they can, until they get everything they want.

So I added throttling. Initially, when someone got throttled it immediately returned them a 429. It's pretty fast to throttle people and they try again immediately so often we'd be returning like 50 req/s per throttled user. I finally decided to add a small delay, since most people are fetching data serially in a loop. The delay means that most people just have 2 throttled req/s.

But some dude came along tonight and I guess was fetching results massively parallel because even with the 500ms delay he was, on his own, causing 150 req/s. This also means that lots of connections were being held open, delayed by the throttle.

So what's the solution here? I'd like a way to keep them from hitting our backend servers I guess. In front of those is an AWS ELB, can I tell the ELB "this guy needs to go away for 60s" or something like that? Should I just allocate a ****load of threads so that all the ones that are holding connections open for throttled users don't get too choked to respond to normal requests? I wish there was a way for the ELB or some other upstream layer to just redirect the user somewhere else.
How did you implement throttling? How many servers are you running? Configure the ELB, or put nginx in front of your service. Nginx is very lightweight, and you can colocate it with your service.
05-17-2017 , 09:17 PM
We actually do user-based rate limiting, and afaik nginx won't work unless you have a way to uniquely identify users before authentication.

We use a custom Ruby class that does the rate limiting, with memcache as the data store and static configs for the limits: per regular user, per admin, and in some cases per customer. It's really not a lot of code, maybe 50 lines of Ruby.
05-17-2017 , 09:20 PM
The memcache key is the user id and the value is the request count. The key expires in 1 min, so we can throttle at a per-minute rate.
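That counter scheme can be sketched in a few lines of Ruby. This is a hypothetical illustration, not the poster's actual class: a plain Hash stands in for memcache, and an injected clock stands in for the key TTL (a real memcache client would get the expiry for free and would use an atomic incr).

```ruby
# Fixed-window rate limiter: one counter per user per 1-minute window.
class RateLimiter
  def initialize(limit_per_minute, clock: -> { Time.now.to_i })
    @limit  = limit_per_minute
    @clock  = clock              # injectable for testing
    @counts = Hash.new(0)        # { [user_id, window] => count }
  end

  # Returns true if the request is allowed, false if throttled.
  def allow?(user_id)
    window = @clock.call / 60    # current 1-minute window number
    key = [user_id, window]
    @counts[key] += 1            # memcache: incr with 60s TTL
    @counts[key] <= @limit
  end
end

limiter = RateLimiter.new(3, clock: -> { 0 })
puts 4.times.map { limiter.allow?("u1") }.inspect
# => [true, true, true, false]
```

When the minute rolls over, the window number changes, so a fresh counter starts automatically; with real memcache the old key simply expires.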
05-17-2017 , 09:28 PM
Ours sounds similar to yours. We're running 6 servers, but that can increase under load. We're also using memcache, but we have a rolling 60-minute window instead of 1 minute. I have considered reducing it to a 10 or 15 minute window, but frankly our memcache server is not taxed even a little bit. The throttling portion of each request takes less than 1 ms.

Unless nginx can use a shared data store like memcache, we would have the problem of only having per-host throttling. I guess you could do it probabilistically (6 servers means giving each one a limit of X/6).

Our code is well under 50 lines, maybe as few as 20 or 30. Most of the throttling logic is actually a little Lua program that runs in memcache.
05-17-2017 , 09:49 PM
What kind of keyboards are you guys rocking? Need a decent one for work
05-17-2017 , 09:57 PM
At home I have an ancient mechanical keyboard. It is loud as **** but I like it. At work I have one of the (wired) mac keyboards, which is OK. **** the wireless ones though.
05-17-2017 , 10:31 PM
Quote:
Originally Posted by RustyBrooks
Unless nginx can use a shared memory database like memcache, then we would have the problem of having only per-host throttling.
One nginx instance should be more than enough for 6 servers; if you need redundancy, you can fail over. Colocating nginx with the app service like muttiah said is pretty common, but that doesn't mean it has to be where throttling happens; it's pretty common to have multiple levels of reverse proxying and load balancing.

Quote:
Originally Posted by muttiah
We actually do user based rate limiting and afaik nginx won't work unless you have a way to uniquely identify users before authentication.
You could still do this with nginx. Identifying users with a unique key should be easy, since authentication isn't necessary here, and you could read from memcache if you really need to, though the more complex the logic, the more it makes sense to separate it out into its own service. You still want some basic rate limiting at this level regardless of what's implemented upstream, and beyond that I would optimize for reduced operational complexity, though that can mean very different things depending on your scale.
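For instance, the limit_req zone key can be any nginx variable, including a client-supplied header; the header name here is invented, and note that requests with an empty key are not counted at all, so you would want a fallback to the client address:

```nginx
# Key the zone on an API-key header instead of the remote address.
limit_req_zone $http_x_api_key zone=per_user:10m rate=5r/s;

server {
    location /api/ {
        limit_req zone=per_user burst=10;
        proxy_pass http://backend;  # hypothetical upstream
    }
}
```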
05-17-2017 , 10:37 PM
Quote:
Originally Posted by RustyBrooks
At home I have an ancient mechanical keyboard. It is loud as **** but I like it. At work I have one of the (wired) mac keyboards, which is OK. **** the wireless ones though.
Currently have an older model of the wireless one
05-17-2017 , 11:40 PM
Quote:
Originally Posted by PJo336
What kind of keyboards are you guys rocking? Need a decent one for work
Got a Das Keyboard with Cherry Brown switches. It's actually quieter than the cheap Dell keyboards once you get used to not bottoming out. Bought it 4 years ago and could probably use it for another 4.

I'm thinking of getting the Topre RGB now. Mechs are that awesome to use.
05-18-2017 , 12:03 AM
not to be that guy but I'm absolutely marking out that I have 31 people playing my game on my side project right now
05-18-2017 , 01:02 AM
Congratulations, Grue. That's easier said than done.

One of the things I find most shocking is how incredibly difficult it is to get traffic.
05-18-2017 , 02:04 AM
It's not that hard, you just have to work at it like you do anything else.

You put 30 hours a week into gaining traffic and in 4 months you'll have gotten somewhere.

People still think it's as easy as writing a few blog posts and posting them to a few places. Organically gaining traffic from nothing is hard work, but very very possible.
05-18-2017 , 06:13 AM
Got a Cherry mechanical keyboard - love it!

Larry is right. Also, a big mistake people make with traffic is focusing entirely on acquisition. Equally important is having a valuable site that is easy to use and informative; that will retain visitors and keep them coming back.
05-18-2017 , 11:50 AM
Quote:
Originally Posted by Larry Legend
It's not that hard, you just have to work at it like you do anything else.

You put 30 hours a week into gaining traffic and in 4 months you'll have gotten somewhere.
It's not hard to pay rent and eat food, you only have to work 40 hours a week.

It's not hard to code up a decent side project, you only have to work 30 hours a week.

It's not hard to create YouTube videos if that's relevant to your project, you only have to work 35 hours a week for each 6 minutes of video.

It's easy to grow a community, you only need to work 20 hours a week.

It's easy to stomp out bugs that you accidentally deployed, you only need to work 20 hours a week.

You see where this is going, Solo Entrepreneur?

Quote:
People still think it's as easy as writing a few blog posts and posting them to a few places. Organically gaining traffic from nothing is hard work, but very very possible.
You are correct, it's not that difficult to get 100 visits a day, but you end up with a 100% bounce rate, 1 page per visit, and no one sticking around to play Grue's games.

I'd rather have a 15% bounce rate, 33% returning visitors, and 5 pages per visit. Those are my real numbers, and I don't think that's easy to do at all.
05-18-2017 , 12:20 PM


heh