
I recently committed a large changeset (~7,000 files) to my svn repository. These 7,000 files make up only 5% of the overall size of the repository, which uses the FSFS backend and is served with svnserve 1.7. Ever since, checkouts of revisions after this mega-commit take 20x longer.

What is Subversion doing internally that is causing the slowdown and is there a way to fix this?

Updates

  1. While manually checking out the bad revision I can see the point at which the checkout starts to slow down. The checkout starts by adding files to the working copy very quickly (too fast to read the tty output). Once the checkout reaches a certain directory, to which the bad revision adds 2,000 files (the directory already contains 17,000 files), files are added to the working copy significantly more slowly (about 5 files per second) for the rest of the checkout. The revision just before the bad one adds files to the working copy quickly the entire time. The files in this directory are about 1 KB each. (See the timing sketch after this list.)

  2. I compiled my own versions of svnserve 1.6 and 1.7 with --with-debugging and --with-gprof so that we could get some insight into what's going on. Some further poking shows that the enhancements made in Subversion 1.7 related to in-memory caching are what is killing it at this revision; i.e., serving the repository with svnserve 1.6 makes the problem go away. Based on the gprof profiles for checkout times at the bad revision (and the one right before it), I'm guessing it's the in-memory caching discussed at http://subversion.apache.org/docs/release-notes/1.7.html#server-performance-tuning. At rBAD, certain FSFS in-memory cache functions are called about 2,000,000,000 times more than at rGOOD. (See the svnserve sketch after this list.)
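
For reference, a minimal way to quantify the difference described in update 1 is to time a clean checkout at the revision just before the mega-commit and at the mega-commit itself. The repository URL and the rGOOD/rBAD revision numbers below are placeholders:

    # rGOOD = revision just before the mega-commit; rBAD = the mega-commit itself.
    # svn://server/repos/trunk stands in for the real repository URL.
    time svn checkout -r rGOOD svn://server/repos/trunk wc-good
    time svn checkout -r rBAD  svn://server/repos/trunk wc-bad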

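Since the slowdown appears tied to the 1.7 in-memory caching, a hedged sketch of the workaround from update 2 is below: either keep serving with the 1.6 svnserve binary, or stay on 1.7 and tune the cache through the options documented in the release-notes page linked above. The install paths and repository root are hypothetical.

    # Paths and the repository root are placeholders; adjust to your setup.

    # Option A: serve with the 1.6 binary, which predates the FSFS
    # in-memory cache that the gprof profiles point at.
    /opt/svn-1.6/bin/svnserve -d -r /var/svn/repos

    # Option B: stay on 1.7 but adjust its cache. --memory-cache-size is
    # in MB (default 16); --cache-fulltexts defaults to yes and
    # --cache-txdeltas to no.
    /opt/svn-1.7/bin/svnserve -d -r /var/svn/repos \
      --memory-cache-size 128 --cache-fulltexts no --cache-txdeltas no
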

1 Answer