This is a discussion on Re: hardlink not working with directories on same file system - Linux
On 27 Oct 2004 08:28:40 -0700,
firstname.lastname@example.org (Michael Paoli) wrote:
> From what I've seen on LINUX, it appears even superuser (root)
> is disallowed from hard linking a directory. This might be
> considered a good thing, or a bad thing, depending on one's
> perspective. My guestimate is it was/is a design decision
> (it may be a matter of libraries/utilities, and not necessarily
Doesn't cp (or is it ln?) have an option to copy a directory tree, but hard-linking the files instead of copying them?
You would end up with two separate donk directories (in fact, two copies of the entire directory tree if it had subdirectories), but all the files therein would be hard-linked to the source files. Then, after the chroot, you would either delete the new directory or merge any additions/deletions back into the original directory.
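That would be GNU cp's -l (--link) flag: combined with -a it recreates the directory tree but hard-links the regular files instead of copying their data. A minimal sketch (GNU coreutils assumed; the directory and file names are invented for the demo):

```shell
#!/bin/sh
# Hard-link-copy a tree with GNU cp.  $src and "file" are demo names.
set -e
src=$(mktemp -d)
echo data > "$src/file"

cp -al "$src" "$src.jail"    # -a: recurse and preserve, -l: hard-link files

# Both paths now refer to the same inode; the directories themselves
# are separate, which is exactly why this is allowed and a directory
# hard link is not.
ls -i "$src/file" "$src.jail/file"
```

Deleting the new tree afterwards removes only the extra links; the original files are untouched as long as their link count stays above zero.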
Last time I did a chroot jail type thing, I went a little overboard and wrote a set of three scripts and one utility program. The first script copied a list of files to another location; read-only files (from the perspective of the user who would be using the jail) were hard-linked, while anything writable was copied. The second script then hard-linked the files listed as modifiable (those files had previously been stripped out of the list given to the first script). The process was then run within its chroot. Finally, the third script searched the modifiable list and synced any changes back to the source directory. If a filename was provided, it also wrote out a list of new and deleted files (generated by diffing against an output file of the first script), in case I ever wanted to write a fourth script to sync some of those changes back into the original directory.
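The bookkeeping step of that third script can be sketched like this: diff a fresh listing of the jail against the first script's saved list to report additions and deletions. The paths and file names here are invented for the demo:

```shell
#!/bin/sh
# Sketch of the third script's new/deleted-file report.
jail=$(mktemp -d); orig_list=$(mktemp)
touch "$jail/a" "$jail/b"

# What the first script would have recorded at copy time.
( cd "$jail" && find . -type f | sort ) > "$orig_list"

rm "$jail/b"; touch "$jail/c"      # simulate work done inside the jail

# diff exits nonzero when the lists differ, so tolerate that explicitly.
changes=$( ( cd "$jail" && find . -type f | sort ) | diff "$orig_list" - ) || true
printf '%s\n' "$changes"    # '>' lines are new files, '<' lines are deleted
```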
The first script was my favorite. It expanded the list using 'ls -il', generating a new file list that included the inode numbers and file access modes it needed. First, all the directories were grep'd out and created in the new location. Then the read-only files were grep'd out and hard-linked into their new locations. Finally, the read-write files were grep'd out and copied normally. A handy side effect of this three-pass process was that files which were neither readable nor writable weren't copied into the new directory structure at all. Files on the modifiable list were hard-linked along with the read-only ones and omitted from the copy list. After all was said and done, I could just delete the new directory tree or leave it.
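A rough shell rendering of that three-pass approach (the demo tree, file names, and mode parsing are my guesses at the original; GNU ls output format is assumed, and filenames containing spaces would break the $NF parsing):

```shell
#!/bin/sh
# Three passes over one 'ls -il' listing: mkdir directories, hard-link
# read-only files, copy writable ones.  All names here are invented.
set -e
src=$(mktemp -d); dst=$(mktemp -d)

# Demo tree: a subdirectory, a read-only file, a group-writable file.
mkdir "$src/etc"
echo config  > "$src/etc/ro.conf"; chmod 644 "$src/etc/ro.conf"
echo scratch > "$src/work.log";    chmod 664 "$src/work.log"

list=$(mktemp)
( cd "$src" && find . -mindepth 1 -exec ls -ild {} + ) > "$list"

# Pass 1: directories (mode starts with 'd') are recreated, never linked.
awk '$2 ~ /^d/ { print $NF }' "$list" |
  while read -r d; do mkdir -p "$dst/$d"; done

# Pass 2: regular files readable but not writable by group/other are
# hard-linked (the "last six characters of the mode" check from above).
awk '$2 ~ /^-/ && substr($2,5) ~ /r/ && substr($2,5) !~ /w/ { print $NF }' "$list" |
  while read -r f; do ln "$src/$f" "$dst/$f"; done

# Pass 3: group- or other-writable files are copied, so writes stay private.
awk '$2 ~ /^-/ && substr($2,5) ~ /w/ { print $NF }' "$list" |
  while read -r f; do cp -p "$src/$f" "$dst/$f"; done
```

Files matching none of the three patterns (unreadable and unwritable) simply never appear in any pass, which reproduces the side effect described above.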
To speed things up, I'd actually written a small program that works much like tee, but with multiple output files, each preceded by a regular expression, a few optional grep-like arguments (such as -v and -x), and an optional argument indicating whether matching lines should be passed through to the next filter. (Output files could also be duplicated, with each match expression optionally skipping any lines already sent to that output file; I added that later for another task.) For use with the above script, it was basically: directories to file 1 (each stage was set to not pass matched lines through), read-only files to file 2 (the last six characters of the file mode were non-w's), and everything else to stdout, which was redirected to file 3 (group- and globally-writable files). It's a surprisingly useful little utility, the source of which was unfortunately lost when the drive containing my home directory crashed.
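Since the original source is lost, here is a one-pass awk reconstruction of just the splitting job it did here; the demo tree, output file names, and the "stop at first match" behavior are my assumptions from the description above:

```shell
#!/bin/sh
# Split one 'ls -il' listing into three streams in a single pass:
# directories, read-only files, and everything else to stdout.
set -e
src=$(mktemp -d); out=$(mktemp -d)
mkdir "$src/etc"
echo a > "$src/etc/ro.conf"; chmod 644 "$src/etc/ro.conf"
echo b > "$src/work.log";    chmod 664 "$src/work.log"

( cd "$src" && find . -mindepth 1 -exec ls -ild {} + ) | awk -v out="$out" '
  $2 ~ /^d/            { print $NF > (out "/dirs.txt"); next }  # no pass-through
  substr($2,5) !~ /w/  { print $NF > (out "/ro.txt");   next }  # read-only
                       { print $NF }                            # rest to stdout
' > "$out/rw.txt"
```

Each rule's `next` is the equivalent of the "do not pass matched lines through" flag; dropping it would let a line fall through to later output files, like the duplicated-output mode mentioned above.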
Anyhow, it all worked very nicely, though I wouldn't suggest it for large directory trees... Use a bind mount for those.