I'm also interested in whether anyone here is working on projects that push current hardware resources to their limits.
Say we need far more RAM than is available today, plus multiple CPUs processing that data effectively.
For example, we need to load 10 TB of data into memory to work with large datasets, but our servers normally have only 500 GB or 1 TB of RAM, and multiple CPUs can't work effectively on that data.
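For context, the workaround we fall back on today looks roughly like the sketch below: memory-mapping the file and streaming it in chunks so only a slice is ever resident in RAM. The file name and chunk size are made up for illustration; the point is that every pass ends up bounded by storage bandwidth instead of memory bandwidth, which is exactly the limitation I'm asking about.

    import numpy as np

    # Hypothetical raw file of float64 values, far larger than available RAM.
    data = np.memmap("huge_dataset.bin", dtype=np.float64, mode="r")

    # Process in chunks so only one slice is resident in memory at a time.
    chunk = 100_000_000  # elements per chunk (~800 MB of float64)
    total = 0.0
    for start in range(0, data.shape[0], chunk):
        total += data[start:start + chunk].sum()

    print("sum over out-of-core data:", total)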
As an example of where this might go, HPE is working on a model with hundreds of TB of memory so that data scientists can work with massive datasets in place. The link below has a picture of the idea:
https://news.hpe.com/memory-driven-computing-explained/
I'm interested in any real-world examples of what people/developers have experienced or feel are the limiting factors, rather than merely looking at current developments in the industry.