Quote:
Originally Posted by saw7988
Let me see if I understand correctly... with AJAX, I'd essentially be pinging something server-side every X seconds (or maybe in a continuous loop?), so my visual updates would then be tied to that sampling rate. With websockets - is there some kind of callback/event-driven method so that code can get called automatically when the server sends something over?
WebSockets are bidirectional: you can send info to the server, ask the server for info, or the server can stream info to you as it gets it. It's much more efficient than polling if you want very up-to-date information. And yes, it's event-driven: you register a callback (onmessage in the browser) that fires whenever the server pushes something, so nothing is tied to a sampling rate.
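If you end up doing the client side in Python on the Pi instead of the browser, here's a rough sketch of the same event-driven pattern using the third-party "websockets" package. The URL and the payload are made up for illustration:

Code:
# Minimal event-driven receive with the third-party "websockets" package.
# The server URL below is invented for illustration. The loop body runs
# each time the server pushes a message, so updates arrive as they happen
# instead of on a polling timer.
import asyncio
import websockets

async def listen():
    async with websockets.connect("ws://example.com/roomba") as ws:
        async for message in ws:  # yields each message as the server sends it
            print("server pushed:", message)

asyncio.run(listen())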
Quote:
Oooh I am not; never heard of these, but I am very curious to learn more. Got a link?
This is sort of a press release for a centimeter-resolution one:
http://www.embedded-computing.com/ne...or-positioning
And I think this guy is using something similar from the previous generation:
https://hackaday.io/project/18296-lo...ization-system
Quote:
I'm very very interested in new techniques, since I don't have something in mind that I really like. Right now, I have a RPi+camera+IMU system that's packaged up and will be mounted on the Roomba. I've got a rough track right now with pure IMU odometry, and am thinking I can augment with some computer vision/object recognition. A visual SLAM thing would probably be the holy grail but I just haven't gotten around to trying to find some code. Oh, and plus, the camera kinda sucks and blurs super easily, which is why I'm currently ignoring the visual odometry algorithm I wrote.
I've done a fair bit of different kinds of image processing work. By far the biggest problems I've had are that cameras suck (good ones exist but are expensive) and that conditions change a lot (lighting, for example).
I've read some really good books on robotic navigation that might interest you. I really liked "Probabilistic Robotics", although admittedly I haven't implemented anything based on the techniques in it:
https://www.amazon.com/gp/product/02...?ie=UTF8&psc=1
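To give a flavor of the book's approach: the core technique it builds up to is Monte Carlo Localization, i.e. a particle filter. Here's a toy 1-D sketch; the motion noise, the Gaussian sensor model, and the landmark at x = 5.0 are all invented for illustration, not taken from the book:

Code:
# Bare-bones 1-D particle filter: motion update, measurement weighting,
# resampling. Everything numeric here is a made-up toy example.
import random
import math

N = 500
particles = [random.uniform(0.0, 10.0) for _ in range(N)]  # initial belief

def step(particles, control, measurement):
    # Motion update: shift every particle by the commanded move plus noise.
    moved = [p + control + random.gauss(0.0, 0.1) for p in particles]
    # Measurement update: weight by how well each particle explains the
    # measured distance to a landmark at x = 5.0 (Gaussian sensor model).
    weights = [math.exp(-((5.0 - p) - measurement) ** 2 / (2 * 0.2 ** 2))
               for p in moved]
    # Resample: redraw particles in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))

particles = step(particles, control=1.0, measurement=3.0)
print("estimated x:", sum(particles) / len(particles))

The nice part is that the same loop fuses your IMU odometry (the motion update) with whatever sensor you have (the measurement update), and the particle cloud naturally represents how uncertain you are.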
Quote:
The obvious solution that would make this trivially easy would be to have stationary cameras that just recognize/track the Roomba from afar. But there's no 1 spot that can see enough of the area, and I just really don't want to be mounting multiple cameras in the corners of the ceiling. Minimal environment modification is for sure a self-imposed constraint, but if those antenna things you're talking about are small and don't need LOS, and I can stick them in the corner of the room or something that could work. Oh and this is supposed to be a super low budget project too
There are probably some interesting low-tech, low-cost methods. I honestly don't know how much these positioning antennae cost; I've never had a use for them.
Here's something I might try: put a really bright LED on the Roomba and use two cameras to capture views of the room. Filter the color channels to pass the LED's color and block most everything else. When you have direct LOS it should be easy to locate the Roomba; when you don't, I bet you could infer its position from the general pattern of light on the walls, by recording camera outputs from known positions and training a machine learning algorithm to match them (see the sketch below).
(I kind of feel like it would work in theory, but, well, you know)
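The direct-LOS half is the easy part to sketch. Assuming OpenCV (cv2) and NumPy are available, something like this would isolate a bright red LED in one frame and return its pixel centroid; the HSV bounds are guesses you'd tune for your actual LED:

Code:
# Isolate a bright, saturated red-ish LED in one camera frame and find its
# pixel centroid. The HSV thresholds are illustrative guesses to be tuned.
import cv2
import numpy as np

def find_led(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Pass only very bright, saturated red-ish pixels; block everything else.
    mask = cv2.inRange(hsv, np.array([0, 150, 200]), np.array([10, 255, 255]))
    moments = cv2.moments(mask)
    if moments["m00"] == 0:
        return None  # LED not visible in this frame (no direct LOS)
    return (moments["m10"] / moments["m00"],  # centroid x in pixels
            moments["m01"] / moments["m00"])  # centroid y in pixels

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print("LED at:", find_led(frame))

With centroids from two calibrated cameras you could triangulate a position; the no-LOS wall-glow matching is the part that would need the learned model, and this sketch doesn't attempt it.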