Update: Digg now receives 230 million plus page views per month and 26 million unique visitors - traffic that necessitated major internal upgrades.
Traffic generated by Digg's over 22 million famously info-hungry users and 230 million page views can crash an unsuspecting website head-on into its CPU, memory, and bandwidth limits. How does Digg handle billions of requests a month?
• How Digg Works by Digg
• How Digg.com uses the LAMP stack to scale upward
• Digg PHP's Scalability and Performance
• APC PHP Accelerator
• Gearman - job scheduling system
• MogileFS - open source distributed filesystem
• Started in late 2004 with a single Linux server running Apache 1.3, PHP 4, and MySQL 4.0 using the default MyISAM storage engine.
• Over 22 million users.
• 230 million plus page views per month
• 26 million unique visitors per month
• Several billion page views per month
• None of the scaling challenges faced had anything to do with PHP. The biggest issues faced were database related.
• Dozens of web servers.
• Dozens of DB servers.
• Six specialized graph database servers to run the Recommendation Engine.
• Six to ten machines that serve files from MogileFS.
• Requests are passed to the Application Server cluster. Application servers consist of: Apache+PHP, Memcached, Gearman and other daemons. They are responsible for coordinating access to different services (DB, MogileFS, etc.) and for building the response sent to the browser.
• Uses a MySQL master-slave setup.
- Four master databases are partitioned by functionality: promotion, profiles, comments, main. Many slave databases hang off each master.
- Writes go to the masters and reads go to the slaves.
- Transaction-heavy servers use the InnoDB storage engine.
- OLAP-heavy servers use the MyISAM storage engine.
- They did not notice a performance degradation moving from MySQL 4.1 to version 5.
- The schema is denormalized more than "your average database design."
- Sharding is used to break the database into several smaller ones.
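The master-slave split above can be sketched as a routing function: writes always go to the functional master, reads are spread across that master's slaves. This is a minimal illustration, not Digg's actual code; the host names, slave counts, and hash-based slave selection are all assumptions for the example.

```python
import hashlib

# Hypothetical hosts for Digg's four functional partitions.
# Real host names and slave counts are not known; these are illustrative.
MASTERS = {
    "promotion": "promo-master",
    "profiles":  "profiles-master",
    "comments":  "comments-master",
    "main":      "main-master",
}

SLAVES = {
    "promotion": ["promo-slave-1", "promo-slave-2"],
    "profiles":  ["profiles-slave-1", "profiles-slave-2"],
    "comments":  ["comments-slave-1", "comments-slave-2"],
    "main":      ["main-slave-1", "main-slave-2", "main-slave-3"],
}

def route(function, operation, key):
    """Pick a database host: writes hit the master for that functional
    partition; reads are distributed over its slaves by hashing the key."""
    if operation == "write":
        return MASTERS[function]
    slaves = SLAVES[function]
    index = int(hashlib.md5(str(key).encode()).hexdigest(), 16) % len(slaves)
    return slaves[index]
```

The same hash-on-key idea extends to sharding: replace the per-function slave list with a list of shards and route both reads and writes by key.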
• Digg's usage pattern makes it easier for them to scale. Most people just view the front page and leave. Thus 98% of Digg's database accesses are reads. With this balance of operations they don't have to worry about the complex work of architecting for writes, which makes it a lot easier for them to scale.
• They had problems with their storage system telling them writes were on disk when they really weren't. Controllers do this to improve the appearance of their performance. But what it does is leave a giant data integrity hole in failure scenarios. This is really a pretty common problem and can be hard to fix, depending on your hardware setup.
• To lighten their database load they used the APC PHP accelerator and MCache.
• Memcached is used for caching and memcached servers seemed to be spread across their database and application servers. A specialized daemon monitors connections and kills connections that have been open too long.
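The usual way memcached fronts a database is the cache-aside pattern: check the cache first, fall back to the database on a miss, then populate the cache with an expiration time. A minimal sketch follows; the in-memory `Cache` class stands in for a real memcached client, and `loader` is a hypothetical database-fetch callback.

```python
import time

class Cache:
    """Toy stand-in for a memcached client: get/set with TTL expiry."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.time() > expires:
            del self._store[key]   # entry expired, treat as a miss
            return None
        return value

    def set(self, key, value, ttl=60):
        self._store[key] = (value, time.time() + ttl)

def cached_fetch(cache, key, loader, ttl=60):
    """Cache-aside read: serve from cache, load from the DB on a miss."""
    value = cache.get(key)
    if value is None:
        value = loader(key)        # cache miss: hit the database
        cache.set(key, value, ttl)
    return value
```

With 98% of accesses being reads, a high hit rate on a cache like this is what keeps most traffic off the database entirely.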
• You can configure PHP not to parse and compile on each load by using a combination of Apache 2's worker threads, FastCGI, and a PHP accelerator. On a page's first load the PHP code is compiled, so subsequent page loads are very fast.
• MogileFS, a distributed file system, serves story icons, user icons, and stores copies of each story’s source. A distributed file system spreads and replicates files across a lot of disks which supports fast and scalable file access.
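The core idea behind a distributed file system like MogileFS is that each file is replicated onto several hosts, so reads scale out and no single disk failure loses data. Below is a small sketch of deterministic replica placement (rendezvous-style hashing); the host names and replica count are assumptions for the example, not MogileFS internals.

```python
import hashlib

# Illustrative storage hosts, matching the "six to ten machines" note above.
HOSTS = ["mogile-1", "mogile-2", "mogile-3", "mogile-4", "mogile-5", "mogile-6"]
REPLICAS = 3

def replica_hosts(path, hosts=HOSTS, replicas=REPLICAS):
    """Deterministically pick `replicas` distinct hosts for a file by
    ranking every host on a hash of (host, path) and taking the top few."""
    ranked = sorted(hosts,
                    key=lambda h: hashlib.md5((h + path).encode()).hexdigest())
    return ranked[:replicas]
```

Because placement is a pure function of the path, any client can compute where a story icon or source copy lives without a central lookup on the read path.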
• A specialized Recommendation Engine service was built to act as their distributed graph database. Relational databases are not well structured for generating recommendations so a separate service was created. LinkedIn did something similar for their graph.
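To see why a graph service fits recommendations better than SQL joins, consider the classic "users who dugg this also dugg" traversal: walk from a user to their stories, to other users of those stories, to those users' other stories. A toy sketch under made-up data (the users, stories, and scoring are all illustrative, not Digg's algorithm):

```python
from collections import Counter

# Hypothetical user -> dugg-stories edges.
DIGGS = {
    "alice": {"s1", "s2", "s3"},
    "bob":   {"s2", "s3", "s4"},
    "carol": {"s3", "s5"},
}

def recommend(user, diggs, top_n=3):
    """Score stories the user hasn't seen by how many
    overlapping-taste users dugg them, a two-hop graph walk."""
    seen = diggs[user]
    scores = Counter()
    for other, stories in diggs.items():
        if other == user or not (seen & stories):
            continue  # skip the user themselves and users with no overlap
        for story in stories - seen:
            scores[story] += 1
    return [story for story, _ in scores.most_common(top_n)]
```

In a relational database each hop is a self-join over a huge diggs table; a dedicated graph service keeps the adjacency structure in a form where these walks are cheap.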
• The number of machines isn't as important as what the pieces are and how they fit together.
• Don't treat the database as a hammer. Recommendations didn't fit well with the relational model, so they made a specialized service.
• Tune MySQL through your database engine selection. Use InnoDB when you need transactions and MyISAM when you don't. For example, tables that are transactional on the master can use MyISAM on the read-only slaves.
• At some point in their growth curve they were unable to grow by adding RAM so had to grow through architecture.
• One way they scale is by being careful of which applications they deploy on their system. They are careful not to release applications which use too much CPU. Clearly Digg has a pretty standard LAMP architecture, but I thought this was an interesting point. Engineers often have a bunch of cool features they want to release, but those features can kill an infrastructure if that infrastructure doesn't grow along with the features. So push back until your system can handle the new features. This goes to capacity planning, something Flickr emphasizes in their scaling process.
• You have to wonder if, by limiting new features to match their infrastructure, Digg might lose ground to other faster-moving social bookmarking services. Perhaps if the infrastructure were more easily scaled they could add features faster, which would help them compete better? On the other hand, just adding features because you can doesn't make a lot of sense either.
• The data layer is where most scaling and performance problems are to be found, and these are not language specific. You'll hit them using Java, PHP, Ruby, or insert your favorite language here.
* LinkedIn Architecture
* Live Journal Architecture
* Flickr Architecture
* An Unorthodox Approach to Database Design : The Coming of the Shard