** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD **

05-15-2017 , 03:10 PM
Quote:
Originally Posted by jjshabado
A startup has no value when it starts.
in reality, maybe.

but some value is assigned when it starts based on airy things like the track record of the team, the promise of the product (if any), the idea, etc.

air meets reality when someone agrees to invest X based on valuation Y. if you convince people it's worth Y, it is.
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-15-2017 , 03:12 PM
$8k is probably the liquidation value of the back-end infrastructure.
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-15-2017 , 03:18 PM
Quote:
Originally Posted by gaming_mouse
in reality, maybe.

but some value is assigned when it starts based on airy things like the track record of the team, the promise of the product (if any), the idea, etc.

air meets reality when someone agrees to invest X based on valuation Y. if you convince people it's worth Y, it is.
Sure, but founders' shares are typically (always? - I'm not sure of the detailed requirements) issued before that investment happens. That's why the price can typically be set so low.

Although even after investment there's a very big difference between the price that you're making investors pay and the price that your stock is valued at for tax purposes (so things like purchase price for employees and option strike prices).
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-15-2017 , 03:32 PM
ah yes, you're right about that.
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-15-2017 , 04:22 PM
Quote:
Originally Posted by Barrin6
160,000 * $.001 = $160.

If this is 2%, the company was only worth $8,000???
At the web development company I worked at, I was granted options equaling 10% of the company for $20, valuing an 11-year-old private company at $200. Also had an acceleration clause so if we got acquired I would vest to 100% instantly and not have to work for the acquirers (or they would have to give me a serious incentive to stay).

Before you raise investment you can do whatever you want.
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-15-2017 , 07:27 PM
So we used mongo today, and I see why everyone says it's great for hackathons and stuff.

Mongo doesn't give a ****.

I don't know if this is the correct term, but I found it funny how lazy it seems at times. I was kinda shocked to see that:

db.example.update({ values as they currently exist })

will say it successfully updated.
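
For the record, this is roughly what it looks like in a newer mongo shell (collection and field names are made up); the write is acknowledged as a success even though nothing changed:

Code:
// hypothetical collection and field, just to show the response shape
db.example.insertOne({ _id: 1, caption: "same" })
db.example.updateOne({ _id: 1 }, { $set: { caption: "same" } })
// => { acknowledged: true, matchedCount: 1, modifiedCount: 0 }
//    (exact shape varies by shell/driver version)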
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-15-2017 , 08:10 PM
Quote:
I would vest to 100% instantly and not have to work for the acquirers
That's a weird clause; it's usually pretty critical for key people to stay on 1-2 years through the transition. That's good leverage for you if a transaction is ever on the table.
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-15-2017 , 08:22 PM
Use mongoose it's slightly less slutty
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-15-2017 , 09:06 PM
Quote:
Originally Posted by Gullanian
That's a weird clause; it's usually pretty critical for key people to stay on 1-2 years through the transition. That's good leverage for you if a transaction is ever on the table.
I think a major reason my boss agreed to it was because if we did get acquired, he would have been there until he retired. As is, I don't think they'll ever get acquired (but who knows stuff can happen), and he's gonna stick with it.

I would have been instantly out at the first opportunity. I didn't even like what we were doing, I was just really good at it because I'm pretty intense and I actually gave a **** to try and push us further all the time with better clients, bigger budgets, and modern practices.
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-16-2017 , 05:57 AM
Quote:
Originally Posted by Larry Legend
I think a major reason my boss agreed to it was because if we did get acquired, he would have been there until he retired. As is, I don't think they'll ever get acquired (but who knows stuff can happen), and he's gonna stick with it.

I would have been instantly out at the first opportunity. I didn't even like what we were doing, I was just really good at it because I'm pretty intense and I actually gave a **** to try and push us further all the time with better clients, bigger budgets, and modern practices.
Sounds like you are a key person and any smart acquirer would know that! Don't know if you're still there, but if you are it's something to be aware of.
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-16-2017 , 03:23 PM
Quote:
Originally Posted by Larry Legend
So we used mongo today, and I see why everyone says it's great for hackathons and stuff.

Mongo doesn't give a ****.

I don't know if this is the correct term, but I found it funny how lazy it seems at times. I was kinda shocked to see that:

db.example.update({ values as they currently exist })

will say it successfully updated.
Postgres:

Code:
create table tt (
	id int,
	etc varchar
);

insert into tt (id, etc)
values (1, 'b');

-- "update" the row to the value it already has; Postgres doesn't check first
update tt
set etc = 'b'
where id = 1;

=> Query returned successfully: one row affected, 62 msec execution time.
It doesn't make a lot of sense to read, check if different, then write. That's clock cycles for branching / merging operations that aren't needed.
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-16-2017 , 04:19 PM
Ok all you "formally educated computer scientists" with your fancy "B-trees" and "algorithms" - I have a problem for you.

I'm considering a performance optimization for a node microservice that serves image metadata to 10 other microservices. Each page a user sees could ultimately call this service dozens of times - with the potential for millions of concurrent users on the site.

Right now the service checks couchbase for the image metadata. If the metadata is there it's returned to the requesting mS and the flow is done. If the metadata is not there, some other stuff happens in the background. But that isn't important as it only happens once(ish) per image and the mS just returns a generic response to the client.

My idea is to cache the top (most popular) 1k to 10k image metadata objects (they're small) in node resident memory, so as to avoid the async call to couchbase. Each node instance would save these in its own resident memory - after getting each image from couchbase while the cache is getting "warmed up". (We're looking at a caching layer to handle this, but that won't be ready for a while.)

So what I need is some kind of data structure that works like a first-in, first-out queue, where I can set the max # to keep in the queue, except if an image is accessed I need to pull it up to the top of the queue. Think of a deck of cards where I am adding new cards to the top, but also sometimes pulling out cards and putting them on top. When my deck hits 1000 cards, I deal a card off the bottom and throw it away.

I could just use arrays and shift/unshift/pop - but from my reading, shift and unshift move every element of the array, so that wouldn't be very efficient.

Er okay I can already see a problem with my system. I need a way to only save popular cards. I don't necessarily want to put every new card on top of the deck. Back to my thinking chair... But anyway, if there's a way to satisfy the queue I describe, I think I could modify it to not give new cards too much precedence.

Or maybe I don't care, as popular cards will always stay in the deck. Hmmm. Depends on the ratio of new cards to popular cards. I think the total # of cards could get into the 100s of 1000s. But most would not be accessed very often.

Last edited by suzzer99; 05-16-2017 at 04:31 PM.
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-16-2017 , 04:23 PM
Sounds like a least recently used cache. You should be able to google various implementations for it.
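
In node, a minimal sketch can lean on the fact that a Map keeps insertion order (class name and capacity are made up for illustration):

Code:
// Least-recently-used cache sketch: a Map keeps insertion order, so the
// "bottom of the deck" is simply the first key.
class LruCache {
  constructor(capacity = 1000) {
    this.capacity = capacity;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);      // re-insert to move it to the top of the deck
    this.map.set(key, value);
    return value;
  }

  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // evict the least recently used entry (first key in insertion order)
      this.map.delete(this.map.keys().next().value);
    }
  }
}
Both operations avoid the array shift/unshift problem mentioned above.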
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-16-2017 , 04:31 PM
It's called an LRU (least recently used) cache.
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-16-2017 , 04:31 PM
Dang it, I should read the rest of the thread first.
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-16-2017 , 07:10 PM
So if I am going to be putting together a resume and I have an app that's a commercial interest of sorts, not just a toy app, how do I go about showing off that code to potential employers (first programming job)?
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-16-2017 , 07:11 PM
Github and have it running somewhere?
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-16-2017 , 07:25 PM
If you want to open-source the code, put it on github. If you have a site or app, then link to it. Just be aware that you won't get a positive response from anyone your app could compete with, so I'd stay away from those types of jobs.

Some get downright angry at you for it. Unless you like being cussed at on the phone, just stay away from any job in the space you built for (unless your project sucks, in which case there's nothing to worry about, I guess).
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-16-2017 , 07:47 PM
I would put the source on Github and deploy the app to Heroku.
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-16-2017 , 08:49 PM
I've been on this contract for a week now. Hooray, actually getting stuff fixed.

I had to use Stack Overflow to figure out why updates, inserts, and deletes weren't working in MySQL Workbench. Apparently it has "safe updates" on by default, which blocks UPDATE and DELETE statements that don't filter on a key column in the WHERE clause.

Guess how many times I used SO in total...
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-16-2017 , 09:30 PM
Quote:
Originally Posted by suzzer99
Ok all you "formally educated computer scientists" with your fancy "B-trees" and "algorithms" - I have a problem for you.

I'm considering a performance optimization for a node microservice that serves image metadata to 10 other microservices. Each page a user sees could ultimately call this service dozens of times - with the potential for millions of concurrent users on the site.

Right now the service checks couchbase for the image metadata. If the metadata is there it's returned to the requesting mS and the flow is done. If the metadata is not there, some other stuff happens in the background. But that isn't important as it only happens once(ish) per image and the mS just returns a generic response to the client.

My idea is to cache the top (most popular) 1k to 10k image metadata objects (they're small) in node resident memory, so as to avoid the async call to couchbase. Each node instance would save these in its own resident memory - after getting each image from couchbase while the cache is getting "warmed up". (We're looking at a caching layer to handle this, but that won't be ready for a while.)

So what I need is some kind of data structure that works like a first-in, first-out queue, where I can set the max # to keep in the queue, except if an image is accessed I need to pull it up to the top of the queue. Think of a deck of cards where I am adding new cards to the top, but also sometimes pulling out cards and putting them on top. When my deck hits 1000 cards, I deal a card off the bottom and throw it away.

I could just use arrays and shift/unshift/pop - but from my reading, shift and unshift move every element of the array, so that wouldn't be very efficient.

Er okay I can already see a problem with my system. I need a way to only save popular cards. I don't necessarily want to put every new card on top of the deck. Back to my thinking chair... But anyway, if there's a way to satisfy the queue I describe, I think I could modify it to not give new cards too much precedence.

Or maybe I don't care, as popular cards will always stay in the deck. Hmmm. Depends on the ratio of new cards to popular cards. I think the total # of cards could get into the 100s of 1000s. But most would not be accessed very often.
Just use Memcached dude! Don't try to invent your own caching algorithms. Whether you need or want distributed Memcache would depend on usage patterns and how much complexity you want to introduce. Memcache with TTL is super simple and should suit your needs. There are only two hard problems in software engineering: 1) naming things, 2) cache invalidation.
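
For what it's worth, a sketch of that from node, assuming the `memcached` npm client and a local memcached instance (client choice, key names, and the 60-second TTL are all assumptions):

Code:
const Memcached = require('memcached');
const memcached = new Memcached('localhost:11211');

const meta = { id: 'abc123', width: 800, height: 600 }; // made-up image metadata

// cache it with a 60-second TTL
memcached.set('image-meta:abc123', JSON.stringify(meta), 60, (err) => {
  if (err) return console.error('set failed', err);

  // read it back; a miss (undefined data) is the cue to go to couchbase instead
  memcached.get('image-meta:abc123', (err, data) => {
    if (err) return console.error('get failed', err);
    console.log(data ? JSON.parse(data) : 'cache miss - fall back to couchbase');
  });
});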
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-16-2017 , 09:33 PM
So it sounds like what you need is a priority queue sorted by number of hits. That's far too complex and silly to implement at your application layer.
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-16-2017 , 11:01 PM
Quote:
Originally Posted by muttiah
Just use Memcached dude! Don't try to invent your own caching algorithms. Whether you need or want distributed Memcache would depend on usage patterns and how much complexity you want to introduce. Memcache with TTL is super simple and should suit your needs. There are only two hard problems in software engineering: 1) naming things, 2) cache invalidation.
Yeah I asked around and we've been playing with memcache. Hopefully they'll get it working with the node microservices soon.
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-16-2017 , 11:16 PM
I feel like I've asked this before but I can't remember, and I'm not happy with the solution I currently have.

I work on a backend API. We had to add throttling because people come along and endlessly fetch pages. As fast as they can, until they get everything they want.

So I added throttling. Initially, when someone got throttled it immediately returned them a 429. It's pretty fast to throttle people and they try again immediately so often we'd be returning like 50 req/s per throttled user. I finally decided to add a small delay, since most people are fetching data serially in a loop. The delay means that most people just have 2 throttled req/s.

But some dude came along tonight and I guess was fetching results massively parallel because even with the 500ms delay he was, on his own, causing 150 req/s. This also means that lots of connections were being held open, delayed by the throttle.

So what's the solution here? I'd like a way to keep them from hitting our backend servers I guess. In front of those is an AWS ELB, can I tell the ELB "this guy needs to go away for 60s" or something like that? Should I just allocate a ****load of threads so that all the ones that are holding connections open for throttled users don't get too choked to respond to normal requests? I wish there was a way for the ELB or some other upstream layer to just redirect the user somewhere else.
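
Not an answer to the ELB question, but for reference, the delayed-429 scheme described above looks roughly like this as an express middleware sketch (the framework, window size, limits, and keying by IP are all assumptions for illustration):

Code:
const express = require('express');
const app = express();

const WINDOW_MS = 1000;        // counting window
const MAX_REQS = 10;           // allowed requests per window, per client
const THROTTLE_DELAY_MS = 500; // hold throttled requests before replying

const counters = new Map();    // clientKey -> { count, windowStart }

app.use((req, res, next) => {
  const key = req.ip;          // or an API key / account id
  const now = Date.now();
  let entry = counters.get(key);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    entry = { count: 0, windowStart: now };
    counters.set(key, entry);
  }
  entry.count += 1;
  if (entry.count > MAX_REQS) {
    // the delay slows serial fetch loops down, but each delayed reply also
    // holds a connection open - which is exactly the problem described above
    return setTimeout(() => res.status(429).send('Too Many Requests'), THROTTLE_DELAY_MS);
  }
  next();
});

app.listen(3000);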
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote
05-16-2017 , 11:58 PM
Quote:
Originally Posted by suzzer99
Github and have it running somewhere?
It is already up and running, the whole nine yards: an SSL certificate, plus a remotely hosted Discourse message board pointing at a sub-domain and connected with SSO login.

This took several months of learning to code and building, and I will be monetizing it soon.

I am pretty confident that all of the important stuff is protected with ENV variables etc., but then again I am still noobish about this stuff... I'm just not sure I want all my source code thrown onto github for anyone to see.

There is also the issue that I would like to maintain some anonymity for the website, and my github for a job resume will have my name on it.
** UnhandledExceptionEventHandler :: OFFICIAL LC / CHATTER THREAD ** Quote

      