In my first segment, I covered the historical foundation for understanding NoSQL. Now we will turn to a critical point on our journey to the present: the impact of the web on our databases.
The Internet is so ubiquitous today that it is easy to forget that most of us grew up in an era when it did not exist for the masses. To say that the rise of the web “changed everything” is so obvious a statement that one would wonder why I would even type it. However, until quite recently, there was one piece of our datacenter infrastructure that did not change in a fundamental way to meet the demands of the web world, and that is the database.
In the late ’80s and early ’90s, it was not just the move from IMS to relational databases that was shaking things up. We also shifted from a centralized model of mainframes and “dumb terminals” to the decentralized world of client-server. As the Internet matured, suddenly we had people connecting to our websites right from their homes, of all places! Before this shift, everything was much easier to control because end-user access came from a static location inside an office building. With people accessing information on our servers from home, traffic started growing to levels most of us had never dreamed of.
It wasn’t long before relational databases were getting saturated with requests for information. The first answer was to relieve the bottleneck at the front, which was accomplished by adding multiple web servers to control and distribute the incoming traffic via some clever techniques. Eventually, the bottleneck moved to the database itself. Clever system architects created read-only copies of these relational databases to handle the requests for data (which built the webpages for the end users in their browsers), and the problem remained mostly solved for several years. The approach worked so well because the requests to the database were almost entirely read requests (that is, getting data out of the database), because that is all you need to build a static webpage.
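The read-scaling pattern described above, sending writes to a single primary while fanning reads out across read-only replicas, can be sketched roughly like this. This is a minimal illustration, not any particular product's implementation; the class and server names are hypothetical:

```python
import itertools

class ReadWriteRouter:
    """Routes SQL statements: writes go to the primary,
    reads are spread round-robin across read-only replicas."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self._replica_cycle = itertools.cycle(replicas)

    def route(self, sql):
        # Reads can be served by any replica; anything that
        # modifies data must hit the primary.
        if sql.lstrip().upper().startswith("SELECT"):
            return next(self._replica_cycle)
        return self.primary

router = ReadWriteRouter("primary-db", ["replica-1", "replica-2"])
print(router.route("SELECT * FROM pages"))       # -> replica-1
print(router.route("SELECT * FROM users"))       # -> replica-2
print(router.route("UPDATE users SET name='x'")) # -> primary-db
```

Because a mostly-static website generates almost nothing but SELECTs, nearly all the load lands on the replicas, which is exactly why this design held up for so long.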
As network bandwidth increased, and developers got more and more creative, the web experience started to shift from purely read-only (static pages) to a more dynamic experience that allowed users to interact with the website. Users could type information into forms and then recall that data later. It sounds trivial now, I know, but this was pretty earth-shattering stuff just 15 years ago. The problem is, many of those fundamental design solutions from 15 years ago are still deployed all over the world to this very day. Developers were going crazy with creativity around this concept of a dynamic web interface because network speeds and reliability just kept getting better and better. It turns out that just about everything in the technology stack was fundamentally changing for the better, except, as I said, the database.
In the absence of any new database architectures, relational database gurus did the best they could with what they had. This 2010 presentation by well-known Oracle expert Guy Harrison shows remarkable clarity and prescience at a time when few relational database experts grasped the changes that were afoot. Below is one of his slides depicting the complexity that was required for relational databases to keep pace with the ever-increasing demand, not just from web “sites” but from true web “applications.”
It was becoming clear that the proverbial dam of the relational database world was starting to crack. What was still unknown, however, was just how many people would really have this kind of data problem. In 2011, a little over a year after giving the presentation cited above, Guy Harrison was himself still unsure. He wrote: “Enterprises that don’t require a large online presence may find the scalability goals of databases such as Cassandra and HBase unnecessary.”
It was a valid assertion in 2011. Just how many companies would really need that kind of an online presence?
Two things were looming just over the horizon that would change both the question and the answer for the next generation of software and its relation to data.
 Of course, at the time the web as we know it was being invented, companies in Silicon Valley were doing the kinds of things that would make Silicon Valley what it is today. However, 95% (maybe more) of the technical population were normal developers like me, easily five years behind the pioneers. They were the teachers; we were the students. A similar ratio and time lag exists today with paradigm-shifting technology: it takes a while for it to filter down to the masses. See Crossing the Chasm for some excellent reasons why this is the case.
 For an excellent, and more comprehensive, look at just how much everything except the database has adapted to this new world, see Patrick McFadin’s presentation Building Antifragile Applications with Apache Cassandra on SlideShare. Although he recommends Apache Cassandra as the solution, his history slides are instructive and can stand alone as an independent part of the presentation.