It could be a killer combination of server technologies: unified object storage paired with sharded, distributed big data. Imagine Hadoop clusters that span geographies and clouds, and whose contents can be addressed the same way as any other file. Such a setup could help bridge the current gap between big data clusters and regulated, relational databases.
Red Hat is planning such a move, as part of its ongoing beta of what’s now called Red Hat Storage 2.