On Wed, 1 Oct 2003, Jeremy Katz wrote:
I also continue to think that the caching is going to thrash things more and cause the actual package installation that's occurring + whatever %post (especially ldconfig) to be that much slower. Unfortunately, that's just a gut feeling that I can't back up until I have some spare time to do some dirt-simple proof-of-concept benchmarking.
The biggest rpm we have currently is 40 MB, and the average package size is 1.4 MB. More than half of the total rpm size is in rpms that are smaller than 10 MB. With a 128 MB recommended RAM size this should be problem-free from a caching point of view. Maybe a special rule could be added to not pre-cache RPMs larger than RAM_size/4.
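Just to illustrate the rule being proposed - this is a hypothetical sketch, not actual installer code, and the function name is made up:

```python
def should_precache(rpm_size_bytes: int, ram_size_bytes: int) -> bool:
    """Proposed heuristic: only pre-cache an RPM if it is no larger
    than a quarter of physical RAM, so caching one package can never
    evict most of the page cache."""
    return rpm_size_bytes <= ram_size_bytes // 4

# With the recommended 128 MB of RAM the cutoff is 32 MB, so the
# 40 MB package would be installed straight from CD, while the
# average 1.4 MB package would be pre-cached.
```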
Also, if the caching is done by copying the next rpm to the HD while the rpm -i of the previous one is running, then even if thrashing happens, it will happen at the HD level, not at the CDROM level - and it's the CDROM that is the fundamental bottleneck of installs.
- mount the ext3 target volumes as ext2 while installing (this could still apply in days of dir_index and ACLs)
We used to do this in the 7.2 timeframe, but the time difference on an install was negligible.
It's not the HD that is holding things up - it's the non-overlap of CDROM and HD IO that hurts. We use the CDROM, then we use the HD to install the rpm, then we use the CDROM again, etc. - instead of using them in parallel and cutting latencies in half.
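The overlap described above is a classic producer/consumer pipeline. Here is a minimal sketch of the idea - the function names are illustrative, and the stage/install callables stand in for the real CD-to-HD copy and the real rpm -i invocation:

```python
import queue
import threading

def pipelined_install(packages, stage, install):
    """Overlap CDROM and HD IO: a background thread stages (copies)
    the next RPM from CD to the hard disk while the foreground
    installs the previously staged one.

    stage(path)   -> staged path (stands in for a CD -> HD copy)
    install(path) -> None        (stands in for "rpm -i <path>")
    """
    staged = queue.Queue(maxsize=2)   # bound how far we read ahead

    def producer():
        for path in packages:
            staged.put(stage(path))   # CD read happens here
        staged.put(None)              # sentinel: nothing left to stage

    t = threading.Thread(target=producer)
    t.start()
    # Consumer: installs read from the HD cache, so they no longer
    # serialize against the CD reads done by the producer thread.
    while (pkg := staged.get()) is not None:
        install(pkg)
    t.join()
```

The bounded queue is the important design choice: it caps how many RPMs sit staged on the HD at once, so read-ahead cannot run arbitrarily far in front of the installer.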
Mounting ext2 instead of ext3 only speeds up HD access (by a very small amount). Also, ext2 and ext3 mostly differ in CPU overhead, not IO overhead - and the install-to-HD process is mostly limited by IO latencies (disk seeks).
Ingo