[WBEL-users] OT: Fwd: Re: Booting linux host, getting error message
(SOLVED)
James Knowles
jamesk@ifm-services.com
Fri, 13 Aug 2004 09:46:23 -0600
Thanks for the additional info. Like I said, this is a new discovery for
me.
>* that the tmpfs filesystem will have a certain capacity at mount time
>(whatever is specified by size= parameter, or 1/2 physical RAM by
>default),
Looks like you're right, here. I see 504MB available on my 1GB RAM
system without using any size parameter.
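A quick way to check this for yourself (assuming /dev/shm is a tmpfs mount using the default size, which is true on most stock Linux systems):

```shell
# Default tmpfs capacity is half of physical RAM when no size= is given.
grep MemTotal /proc/meminfo   # total physical RAM in kB
df -k /dev/shm                # the tmpfs "1K-blocks" figure is typically half of that
```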
>* the maximum amount of VM that will be consumed is
>bounded by the size= parameter, and that it cannot grow without bounds
I've not tested boundary behaviour. I was happy to find tmpfs because
/var/run consumes very little space and I wanted to get rid of those
pesky boot-time messages without hacking init scripts.
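For anyone wanting to do the same, a sketch of the fstab entry (the 256m cap and mode are illustrative, not what I actually use; omitting size= gives the half-of-RAM default):

```
# /etc/fstab -- mount /var/run as tmpfs; size= caps the capacity
tmpfs   /var/run   tmpfs   size=256m,mode=0755   0 0
```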
>* As files are removed or unmapped, the space is not returned to
>the VM
>
This appears to be incorrect. The behaviour that I see is that the space
is returned immediately, which is correct from a generic filesystem
design standpoint.
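Easy to see on /dev/shm (assumed to be a tmpfs mount, as on most Linux systems; the file name is just an example):

```shell
df -k /dev/shm                                   # note the Used column
dd if=/dev/zero of=/dev/shm/demo bs=1M count=64 2>/dev/null
df -k /dev/shm                                   # Used grows by about 64 MB
rm /dev/shm/demo
df -k /dev/shm                                   # Used drops back immediately
```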
>the semantics of a file system are such that the data
>remains in place even if it's not utilised,
No, there are no such semantics in a file system (special-purpose file
systems aside). What you describe is not by design but a side-effect of
the fact that magnetic disk storage is persistent. Leaving that old data
around is bad from a security standpoint, but good from a performance
standpoint.
If the medium is not persistent, then there is no reason why the old
data must persist.
>/dev/anon functionality
I'm not familiar with this, except from the article. It appears to be
solving a different problem, one revolving around the kernel's
interaction with filesystems rather than tmpfs itself. The Linux kernel is
pretty aggressive about caching because disks are very slow compared to
RAM and CPU. With UML, the guest kernel is unaware that it's running in
an atypical situation.
I could be wrong there, of course. That's what I gleaned from a quick scan.