Friday, May 5, 2017

New CS:APP song!

Prof. Casey Deccio at BYU wrote this catchy ditty about CS:APP called Hello Summer Break. It's pretty awesome!

Tuesday, May 2, 2017

The Memory Mountain L2 Bump

In a previous post, I showed how the different editions of CS:APP have used the memory mountain as a way to visualize memory system performance.  It demonstrates how the cache hierarchy affects performance, which in turn can be used by application programmers to write more efficient programs.

Here's the memory mountain for a recent Intel processor:

As the label indicates, there is a curious bump in the curve for cases where the data are held in the L2 cache: the throughput does not fall off smoothly as a function of stride.  Looking back at the other mountains, you'll see this phenomenon in other cases as well, specifically for Intel processors that employ prefetching.

Let's investigate this phenomenon more closely.  Quite honestly, though, I have no explanation for why it occurs.

In the above processor, a cache block holds eight 8-byte long integers.  For a stride of S, considering only spatial locality, we would expect a miss rate of around S/8 for strides up to 8.  For example, a stride of 1 would yield one miss followed by 7 hits, while a stride of 2 would yield one miss followed by 3 hits.  For strides of 8 or more, the miss rate would be 100%.  If a read that misses in the cache incurs a delay M, and one that hits incurs a delay H, then we would expect the average time per access to be M*S/8 + H*(1-S/8).  The throughput should be the reciprocal of this average delay.
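This model is simple enough to compute directly.  Here's a small sketch; the values of M and H passed in are illustrative placeholders, not measured delays:

```c
#include <assert.h>

/* Predicted read throughput under the simple spatial-locality model:
   a cache block holds 8 longs, so a fraction min(S,8)/8 of reads miss.
   M and H are the per-read miss and hit delays in ns; both are
   illustrative placeholders. */
double predicted_mbps(int stride, double M, double H) {
    double miss_rate = (stride >= 8) ? 1.0 : stride / 8.0;
    double avg_ns = M * miss_rate + H * (1.0 - miss_rate);
    /* 8 bytes per read; bytes/ns = GB/s, so scale by 1000 for MB/s */
    return 8.0 / avg_ns * 1000.0;
}
```

Matching M and H against the measured throughputs at S=1 and S=8 pins the two endpoints, and the model then predicts the values in between.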

For the larger sizes, where data resides in the L3 cache, this predictive model holds fairly well.  Here are the data for a size of 4 megabytes:

In this graph, the values of M and H were determined by matching the data for S=1 and S=8.  So, it's no surprise that these two cases match exactly.  But, the model also works fairly well for other values of S.  

For sizes that fit in the L2 cache, however, the predictive model is clearly off:

We can see the bump.  The data tracks well for S=2, but for S=3, the actual memory system outperforms the predictive model.  The effect drops off for larger values of S.

I have experimented with the measurement code to see if the bump is some artifact of how we run the tests, but I believe this is not the case.  I believe there is some feature of the memory system that causes this phenomenon.

I would welcome any ideas on what might cause memory mountains to have this bump.



A Gallery of Memory Mountains

Through all 3 editions, CS:APP has used memory mountains to illustrate how the cache hierarchy affects memory system performance.  Here we compare the memory mountains of different processors over the years, revealing evolutionary changes in memory system design.

Here's the mountain from the First Edition, based on a 1999 Intel Pentium III Xeon:



The memory mountain shows the throughput achieved by a program repeatedly reading elements from an array of N elements, using a stride of S (i.e., accessing elements 0, S, 2S, 3S, ..., N-1).  The performance, measured in megabytes (MB) per second, varies according to how many of the elements are found in one of the processor's caches.  For small values of N, the elements can be held in the L1 cache, achieving maximum read throughput.  For larger values of N, the elements can be held in the L2 cache, and the L1 cache may be helpful for exploiting spatial locality for smaller values of S.  For large values of N, the elements will reside in main memory, but both the L1 and L2 cache can improve performance when S enables some degree of spatial locality.
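A sketch of the kind of read loop behind these measurements follows.  The actual CS:APP code adds careful timing machinery around a loop like this; the names and array bound here are illustrative:

```c
#define MAXELEMS (1 << 20)      /* illustrative working-set bound */
static long data[MAXELEMS];

/* Read every stride-th element of data[0..elems-1], with 4-way loop
   unrolling so that independent reads can proceed in parallel.
   Returning the sum keeps the compiler from optimizing the reads
   away. */
long test(long elems, long stride) {
    long acc0 = 0, acc1 = 0, acc2 = 0, acc3 = 0;
    long limit = elems - 4 * stride;
    long i;
    for (i = 0; i < limit; i += 4 * stride) {
        acc0 += data[i];
        acc1 += data[i + stride];
        acc2 += data[i + 2 * stride];
        acc3 += data[i + 3 * stride];
    }
    for (; i < elems; i += stride)   /* leftover elements */
        acc0 += data[i];
    return (acc0 + acc1) + (acc2 + acc3);
}
```

The mountain is generated by timing this loop over a grid of (size, stride) pairs and converting bytes read per unit time into MB/s.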

By way of reference, the use of the memory mountain for visualizing memory performance was devised by Thomas Stricker while he was a PhD student at CMU in the 1990s working for Prof. Thomas Gross.  Both of them now live in Switzerland, with Thomas Gross on the faculty at ETH.

Jumping forward 9 years, the above figure illustrates the performance for a 2009 iMac that I still use as a home computer.  It has the same qualitative characteristics as the Pentium III, with two levels of cache.  Overall, it has over 10X higher throughput.  You'll also notice how smooth the curves are.  That's partly because I rewrote the code last year to give more reliable timing measurements.

For the second edition of CS:APP, we used a 2009 Intel Core i7, based on the Nehalem microarchitecture.  The data here are noisy; they predate my improved timing code.  There are several important features in this mountain not found in the earlier ones.  First, there are 3 levels of cache.  There is also a striking change along the back edge: the performance for a stride of 1 stays high for sizes up to around 16MB, where the data can still be held in the L2 cache.  This reflects the ability of the memory system to perform prefetching: it observes the memory access pattern, predicts which memory blocks will be read in the future, and copies these blocks from L3 up to L2 and L1.  Then, when the processor reads these memory locations, the data will already be in the L1 cache.  The overall performance is well short of that measured for the contemporary (2008) Core Duo shown above.  This could partly be due to differences in the timing code: the newer version uses 4-way parallelism when reading the data, whereas the old code was purely sequential.


For the third edition of CS:APP, we used a 2013 Intel Core i5, using the Haswell microarchitecture.  The above figure shows measurements for this machine using the improved timing code.  Overall, though, the memory system is similar to the Nehalem processor from CS:APP2e.  It has 3 levels of cache and uses prefetching. Note how high the overall throughputs are.

The final mountain uses measurements from a Raspberry Pi 3.  The Pi uses a processor included as part of a Broadcom "system on a chip" designed for use in smartphones.  The processor is based on the ARM Cortex-A53 microarchitecture.  Performance-wise, it is much slower than a contemporary desktop processor, but it is very respectable for an embedded processor.  There's a clear 2-level cache structure.  It also appears that some amount of prefetching might occur with both cache and main memory accesses.

Over the nearly 20-year time span represented by these machines, we can see that memory systems have undergone evolutionary changes.  More levels of cache have been added, and caches have become larger.  Throughputs have improved by over an order of magnitude.  Prefetching helps when access patterns are predictable.  It's interesting to see how the visualization provided by memory mountains enables us to see these qualitative and quantitative changes.








Tuesday, April 4, 2017

Sample Profiling Code Available

Section 5.14 of CS:APP3e demonstrates how to use the code profiling tool gprof to identify slow portions of a program.  It uses as an example a dictionary program that can compute n-gram statistics for a body of text.  Here are the measurements from profiling the code when computing the bigram statistics of all of Shakespeare's works:



We have made all of the code used in this example available on the CS:APP student website, as part of the Chapter 5 materials.  The code has been tested on multiple Linux systems.  You can see how the code does when running on your machine.

Chinese Version of Third Edition Available

The Chinese version of CS:APP3e was published by China Machine Press in November, 2016.


More information is available at the publisher's website.
The translation was performed by Yili Gong and Lian He of Wuhan University.  We are grateful for the conscientious job Yili has done for all three editions of the book.


Thursday, February 18, 2016

Buffer Overflow Vulnerability Discovered in glibc

There's a report out today from Google that their security team discovered a buffer overflow vulnerability in the GNU implementation of getaddrinfo.  Readers of Chapter 11 of CS:APP3e know this function as a very general tool for converting string representations of network parameters into the data structures used by other networking functions.  Engineers at Google and Red Hat were able to demonstrate that the program error could be exploited with a buffer overflow attack.
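For readers who want to see the interface in question, here is a minimal sketch of a getaddrinfo call in the Chapter 11 style.  The function name `lookup` is made up, and the AI_NUMERICHOST flag is used so the example needs no actual DNS traffic; drop it for real lookups:

```c
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

/* Convert a numeric host string and service into socket address
   structures, then format the first result back into printable form. */
int lookup(const char *host, const char *service,
           char *buf, socklen_t buflen) {
    struct addrinfo hints, *listp;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;        /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_NUMERICHOST;    /* host is already numeric */
    int rc = getaddrinfo(host, service, &hints, &listp);
    if (rc != 0)
        return rc;
    rc = getnameinfo(listp->ai_addr, listp->ai_addrlen,
                     buf, buflen, NULL, 0, NI_NUMERICHOST);
    freeaddrinfo(listp);                /* always free the result list */
    return rc;
}
```

The generality of this interface, with its hint flags and growable internal buffers, is part of what made the glibc implementation hard to test exhaustively.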

It's instructive to read the bug tracking reports at the Google post on their discovery:

https://googleonlinesecurity.blogspot.com/2016/02/cve-2015-7547-glibc-getaddrinfo-stack.html

as well as the bug tracking log covering the actual error:

https://sourceware.org/bugzilla/show_bug.cgi?id=18665

There are several important insights to be gained from this report:


  • Buffer overflows are still a key source of software vulnerabilities.  Although they can be mitigated by address space randomization and other techniques, they still show up.
  • This bug was introduced with glibc 2.9 in May 2008.  It was first reported in July 2015 and fixed in February 2016.  That's a long time for a security vulnerability to lie undetected.
  • It only happens when the data exceed the 2048-byte limit of the regular buffer size.  The code then allocates more memory, but it does not correctly update some of the size information.  Apparently, this part of the code was not tested very carefully.  It's an unfortunate reality of program testing that it's hard to reach all of the corner cases in a program.  It seems like using code coverage tools could have been beneficial here.
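Schematically, the bug class looks like the following.  This is an illustration only, not glibc's actual code; every name here is made up:

```c
#include <stdlib.h>
#include <string.h>

/* A buffer that starts on the stack and moves to the heap when it
   outgrows its fixed size.  The key invariant: size must always
   describe the capacity of ptr.  The glibc bug amounted to updating
   one of these fields but not the other, so a later write used a
   stale size and overran the buffer. */
struct resp_buf {
    char *ptr;      /* current buffer */
    size_t size;    /* capacity of ptr -- must stay in sync! */
};

int ensure_capacity(struct resp_buf *b, char *stack_buf,
                    size_t stack_size, size_t needed) {
    if (needed <= b->size)
        return 0;                       /* already big enough */
    char *heap = malloc(needed);
    if (heap == NULL)
        return -1;
    memcpy(heap, b->ptr, b->size);      /* preserve existing contents */
    if (b->ptr != stack_buf)            /* only free what malloc gave us */
        free(b->ptr);
    b->ptr = heap;
    b->size = needed;   /* the update the buggy path effectively skipped */
    return 0;
}
```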


Tuesday, February 9, 2016

Updated the CS:APP Proxy Lab

We've updated the CS:APP Proxy Lab with a new autograder that checks for basic proxy behavior, concurrent execution, and file caching. We've been using this autograder at CMU for several years now and are happy to make it available to the CS:APP community.

Tuesday, January 12, 2016

Updated the CS:APP Bomb Lab

We've released an update to the Bomb Lab on the CS:APP site. An authentication key associated with each bomb prevents spoofing (from Zaheer Chothia, ETH, Switzerland). And a configurable timeout in the request daemon prevents it from hanging while interacting with clients under heavy loads (from Len Hamy, Macquarie University, Australia).

Monday, January 11, 2016

New x86-64 Attack Lab is Available!

We are pleased to announce that the new Attack Lab is available on the CS:APP site.

The Attack Lab was first offered to CMU students in Fall 2015. It is the 64-bit successor to the 32-bit Buffer Lab and was designed for CS:APP3e. In this lab, students are given a pair of unique custom-generated x86-64 binary executables, called targets, that have buffer overflow bugs. One target is vulnerable to code injection attacks. The other is vulnerable to return-oriented programming attacks. Students are asked to modify the behavior of the targets by developing exploits based on either code injection or return-oriented programming.

Wednesday, August 26, 2015

Diane's silk dress costs $89

What could a woman's wardrobe have to do with computer systems?

This is a clever mnemonic devised by Geoff Kuenning of Harvey Mudd College to help him remember which registers are used for passing arguments in a Linux x86-64 system:

%rdi:   Diane's
%rsi:   silk
%rdx:   dress
%rcx:   costs
%r8:    $8
%r9:    9
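To see the mnemonic in action, compile a six-argument function and look at the assembly (the function name here is made up):

```c
/* With gcc -O1 -S on Linux x86-64, the generated code reads these
   arguments from %rdi, %rsi, %rdx, %rcx, %r8, and %r9, in that
   order: Diane's silk dress costs $8 9. */
long dress(long a, long b, long c, long d, long e, long f) {
    return a + 10*b + 100*c + 1000*d + 10000*e + 100000*f;
}
```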

Thanks to Geoff for providing this helpful aid!

Tuesday, June 2, 2015

Third Edition: Ready for Fall Courses

The Third Edition of Computer Systems: A Programmer's Perspective came out in March.  The CS:APP web page now contains information for this edition, with a link to the web pages for the second edition.  We already have a (fortunately small) errata page.

This fall, we will be teaching 15-213, the CMU course that originally inspired the book.  Leading up to that, we will update the lecture slides and the labs, and we will be making them available on the instructors' site.

Wednesday, February 11, 2015

The third edition will be out March 11, 2015

We spent much of 2014 writing and revising the book.  We feel this edition brings the book up to date and makes the presentation of some of the material clearer.

According to Amazon, the book will be available starting March 11.

Here are some chapter-by-chapter highlights:
  • Ch. 2 (Data): After hearing many students say "It's too hard!" we took a closer look and decided that the presentation could be improved by more clearly indicating which sections should be treated as informal discussion and which should be studied as formal derivations (and possibly skipped on first reading).  Hopefully, these guideposts will help the students navigate the material, without us reducing the rigor of the presentation.
  • Ch. 3 (Machine Programming): It's x86-64 all the way!  The entire presentation of machine language is based on x86-64.  Now that even cellphones run 64-bit processors, it seemed like it was time to make this change.  Eliminating IA32 also freed up space to put floating-point machine code back in (it was present in the 1st edition and moved to the web for the 2nd edition).  We generated a web aside describing IA32.  Once students know x86-64, the step (back) to IA32 is fairly simple.
  • Ch. 4 (Architecture): Welcome to Y86-64!  We made the simple change of expanding all of the data widths to 64 bits.  We also rewrote all of the machine code to use x86-64 procedure conventions.
  • Ch. 5 (Optimization): We brought the machine-dependent performance optimization up to date based on more recent versions of x86 processors.  The web aside on SIMD programming has been updated for AVX2.  This material becomes even more relevant as industry looks to the SIMD instructions to juice up performance.
  • Ch. 7 (Linking): Linking has been updated for x86-64.  We expanded the discussion of position-independent code and introduced library interpositioning.
  • Ch. 8 (Exceptional Control Flow): We have added a more rigorous treatment of signal handlers, including signal-safe functions.
  • Ch. 11 (Network Programming): We have rewritten all of the code to use new libraries that support protocol-independent and thread-safe programming.
  • Ch. 12 (Concurrent Programming): We have increased our coverage of thread-level parallelism to make programs run faster on multi-core processors.

Friday, June 13, 2014

Third edition in the works

We've gotten started on the third edition of CS:APP.  The biggest change will be that we will shift entirely to 64 bits.  It seems like that shift has finally occurred across most systems, and so we can say goodbye to 32-bit systems.

Here's a summary of the planned changes for each chapter.
  1. Introduction.  Minor revisions.  Move the discussion of Amdahl's Law to here, since it applies across many aspects of computer systems
  2. Data.  Do some tuning to improve the presentation, without diminishing the core content.  Present fixed word size data types.
  3. Machine code.  A complete rewrite, using x86-64 as the machine language, rather than IA32.  Also update examples based on a more recent version of GCC (4.8.1).  Thankfully, GCC has introduced a new optimization level, specified with the command-line option `-Og', that provides a fairly direct mapping between the C and assembly code.  We will provide a web aside describing IA32.
  4. Architecture.  Shift from Y86 to Y86-64.  This includes having 15 registers (omitting %r15 simplifies instruction encoding) and making all data and addresses 64 bits.  Also update all of the code examples to follow the x86-64 ABI conventions.
  5. Optimization.  All examples will be updated (they're mostly x86-64 already).
  6. Memory Hierarchy.  Updated to reflect more recent technology.
  7. Linking.  Rewritten for x86-64.  We've also expanded the discussion of using the GOT and PLT to create position-independent code, and added a new section on the very cool technique of library interpositioning.
  8. Exceptional Control Flow.  More rigorous treatment of signal handlers, including async-signal-safe functions, specific guidelines for writing signal handlers, and using sigsuspend to wait for handlers.
  9. VM.  Minor revisions.
  10. System-Level I/O.  Added a new section on files and the file hierarchy.
  11. Network programming.  Protocol-independent and thread-safe sockets programming using the modern getaddrinfo and getnameinfo functions, replacing the obsolete and non-reentrant gethostbyname and gethostbyaddr functions.
  12. Concurrent programming.  Enhanced coverage of performance aspects of parallel multicore programs.
The new edition will be available March, 2015.

Wednesday, March 27, 2013

Updated the CS:APP Bomb Lab

We've just released an update to the Bomb Lab on the CS:APP site. It fixes a bug caused by the fact that on some systems, hostname() doesn't return a fully qualified domain name.

Tuesday, January 22, 2013

The CS:APP Cache Lab

We've released a new lab, called the Cache Lab, that we've been using at CMU in place of the Performance Lab for a few semesters. In this lab, students write their own general-purpose cache simulator, and then optimize a small matrix transpose kernel to minimize the number of cache misses. We've found that it really helps students to understand how cache memories work, and to clarify key ideas such as spatial and temporal locality.
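As a taste of what the lab asks for, here is a stripped-down, direct-mapped hit/miss counter.  The real lab requires a general set-associative simulator with LRU replacement driven by a memory trace; the parameters below are arbitrary:

```c
/* Toy direct-mapped cache simulator: 2^S_BITS sets, one line per set,
   2^B_BITS-byte blocks.  Counts hits and misses for a stream of
   byte addresses. */
#define S_BITS 4                 /* 16 sets */
#define B_BITS 5                 /* 32-byte blocks */
#define NSETS (1 << S_BITS)

static struct { int valid; unsigned long tag; } cache[NSETS];
static long hits, misses;

void access_addr(unsigned long addr) {
    unsigned long set = (addr >> B_BITS) & (NSETS - 1);
    unsigned long tag = addr >> (B_BITS + S_BITS);
    if (cache[set].valid && cache[set].tag == tag) {
        hits++;
    } else {
        misses++;                /* cold or conflict miss */
        cache[set].valid = 1;    /* install the new block */
        cache[set].tag = tag;
    }
}
```

Feeding this simulator sequential addresses shows spatial locality directly: each 32-byte block costs one miss followed by 31 hits.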

Monday, November 12, 2012

Peking University Report

 
I just returned from a trip to Peking University (PKU).  They have recently adopted CS:APP as the textbook for their course "Introduction to Computer Systems" (ICS), patterned after the course we teach at CMU (the course for which CS:APP was originally written).

They now require ICS for all CS majors.  Moreover, as part of an initiative by the president of the university, they are teaching it in a form where they have the usual lectures, but they also hold weekly recitation sections taught by faculty members.  It is one of six courses being taught across the entire university in this format this term.  Here are some statistics for this term:

  • 167 students
  • 14 recitation sections (12 students each)
  • 14 faculty doing recitations
  • 8 faculty doing lectures
That's a lot of resources to devote to a single course!

Monday, June 11, 2012

Chinese Translations of CS:APP



In a recent blog post, I noted that 52% of all copies of CS:APP sold have been in Chinese.  Prof. Yili Gong of Wuhan University did the translations for both the first and second editions of the book.  Prof. Gong has also been a valuable contributor to our errata.

I recently came back from a trip to China, where I gave lectures about CS:APP at both Peking University and Tsinghua University, both of which use the book in their courses.  Looking at our adoptions list, there are only 8 universities in China that we know of using CS:APP as a textbook.  Apparently, the vast majority of copies sold in China are being used by individuals for self study.

Wednesday, May 30, 2012

Who Reads CS:APP?

I gathered some data on the total sales of the various versions of CS:APP.  It's now in its second edition, and it has appeared in multiple languages:
  • English.  Including versions published in India (1st edition only) and China (1st and 2nd edition) for readers in those two countries
  • Chinese (1st and 2nd edition)
  • Korean (2nd edition)
  • Russian (1st edition)
  • Macedonian (1st edition)
All told, as of Dec. 31, 2011, a total of 116,574 books have been sold, across all editions, versions, and formats (paperback, hardcopy, e-book).  The following pie chart shows how this divides across the language categories (sorry, no statistics on Macedonian, but I imagine the numbers are fairly small):





One thing that's clear is that we're very popular in China: fully 52% of the total has been in Chinese, and another 15% has been the English version for the Chinese market.

Thursday, May 17, 2012

Update to the Bomb Lab

We've updated the Bomb Lab sources on the CS:APP site to address a problem that arises when students from previous semesters run their old bombs while the current instance of the lab is underway.

The Bomb Lab servers assign diffusions and explosions to Bomb IDs, rather than users, and Bomb IDs start over from scratch each term. Thus, if a student who took the class last semester ran their old bomb while the lab was underway this semester, then the explosions and diffusions from the old bomb would be incorrectly assigned to the current bomb with the same Bomb ID.

To address this, we've added a per-semester identifier, called $LABID,  to the Bomb Lab config file. Instructors can set this variable each term (for example $LABID="f12") to uniquely identify each offering. Any results from previous bombs with different $LABIDs are ignored.

Thanks to Prof. Godmar Back, Virginia Tech, for pointing this out.