Aggregation is easy; curation is hard.
The continuing advance of technology, and our adapting use of it, will demand ever more attention to how we filter online content.
Where there were once only tens of thousands of authors and perhaps a million literate readers before Gutenberg, the author-to-reader ratio is now approaching parity, with counts in the billions. The increase in computing power, the decrease in cost and size, and the broadening of affordability and access make accelerating change inevitable. How will we cope with the data deluge?
We are connected through any of several devices, from DVD players and gaming consoles to computers and phones. Inanimate objects like houses, appliances, and automobiles will soon contribute to the streams. We are probably creating hundreds of Libraries of Congress' worth of static information per year, and the dynamic information is a data firehose. How do you make sense of it all on the greater internet, let alone the little slice to which you pay attention?
As to the filter itself: should it be a social filter, where you see what people you know, or people like you, have read? Should it be based on popularity (page views) or on sharing (via email or links on social media)? Should it filter by the authority or reputation of the author? The next question to answer is how curation gets done, and whether it should be done by humans, by machines, or by both.
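To make those trade-offs concrete, here is a minimal sketch of how the candidate signals might be blended into a single relevance score. The Item fields, the weights, and the relevance function are hypothetical, chosen only to illustrate the design space; this is not a description of how any real service ranks content.

```python
# A minimal sketch of blending the candidate filters into one relevance score.
# Every field name and weight is hypothetical, purely for illustration.
from dataclasses import dataclass
from math import log1p

@dataclass
class Item:
    read_by_friends: int       # social signal: people you know who read it
    page_views: int            # popularity signal
    shares: int                # shared via email or social-media links
    author_reputation: float   # authority signal, assumed normalized to 0..1

def relevance(item: Item,
              w_social: float = 0.4,
              w_views: float = 0.2,
              w_shares: float = 0.2,
              w_authority: float = 0.2) -> float:
    """Blend the four filters into one score; the weights are arbitrary guesses."""
    # log1p dampens raw counts so one viral outlier doesn't drown out everything else
    return (w_social * log1p(item.read_by_friends)
            + w_views * log1p(item.page_views)
            + w_shares * log1p(item.shares)
            + w_authority * item.author_reputation)

# Rank a small stream, highest blended score first.
stream = [Item(3, 10_000, 250, 0.9), Item(0, 500_000, 8_000, 0.2)]
for item in sorted(stream, key=relevance, reverse=True):
    print(item, round(relevance(item), 2))
```

Every one of those weights is a policy choice, and that is exactly where the silo and echo-chamber risks discussed below creep in.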
We need a system that seeks to open the Johari window [wiki]: one that minimizes unknown-unknowns and known-unknowns, surfaces unknown-knowns, and maximizes and exploits known-knowns.
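For concreteness, one way to read those hyphenated terms: the first word is whether you are aware of a piece of information, the second is whether you actually have it. The tiny sketch below is hypothetical, just to pin down the vocabulary.

```python
# A minimal sketch, assuming the quadrants reduce to two booleans:
# awareness of the information, and actual possession of it. Illustrative only.
def quadrant(aware_of_it: bool, have_it: bool) -> str:
    awareness = "known" if aware_of_it else "unknown"
    possession = "known" if have_it else "unknown"
    return f"{awareness}-{possession}"

assert quadrant(True, True) == "known-known"        # maximize and exploit
assert quadrant(True, False) == "known-unknown"     # a gap we know to go fill
assert quadrant(False, True) == "unknown-known"     # buried in the stream; surface it
assert quadrant(False, False) == "unknown-unknown"  # the blind spot to minimize
```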
My fear is that many of the answers to those questions will create information silos, bottlenecks, blind spots, and stovepipes (see the partisan blogospheres), breed knee-jerk contrarianism (see Slate), or, just like legacy media, pander to the lowest common denominators of titillation and sensationalism (see HuffPo). Unfortunately, as it stands, the economics of the internet rely on the latter, while the greatest need is for the former.