DragonFly BSD
DragonFly kernel List (threaded) for 2010-09

Re: Thoughts on Quotas


From: Michael Neumann <mneumann@xxxxxxxx>
Date: Wed, 29 Sep 2010 23:35:43 +0200

On Wednesday, 29.09.2010, at 20:48 +0200, Rumko wrote:
> Jasse Jansson wrote:
> <snip>
> > 
> > While you're at it, why not make two kinds of snapshots:
> > 
> > 1. A user-initiated snapshot, usnap, that the user controls and that
> >    counts towards the quota limit.
> 
> These already exist: they are called backups. rsync, cpdup, cp, bacula,
> amanda, etc. already do this, and as the source you can provide them with
> any snapshot you have access to, or even the current dataset if you don't
> care whether it changes while you're copying it.
> It might be nice from certain perspectives (though I'd disagree), but in
> any case this is off-topic; we were supposed to be discussing user/PFS
> quotas with the current HAMMER fs. Even though the user only has indirect
> control over snapshots (asking the admin to manage them and limiting
> rewrites of files), quotas are supposed to limit the user's impact on
> disk space. If historical data is not accounted for, there is in fact no
> limit on the user's impact on disk space: give the user a hard quota of
> 1GB, and the user can take a 1TB file, split it into 1GB-sized parts, and
> keep saving each part over the same file. The user will still be able to
> access most or all of those parts from the file's history, but will only
> pay the quota price of the last piece; or, in an even worse scenario,
> will fill up your drive so the other, nicer users won't be able to use
> your services.

Maybe we could keep statistics on the number of bytes each user writes.
The total number of bytes written, combined with the accumulated size of
all files, might give a more precise quota system and would somewhat
mitigate the problem of evil users. But an evil user would still be able
to produce more data just by writing a single byte to different blocks,
each time generating new history records, or simply by using mmap.
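
To make the accounting idea concrete, here is a minimal sketch in C. All
names (struct user_quota, quota_check_write, the limit fields) are
hypothetical illustrations, not HAMMER's actual structures or API; it only
shows the dual check on live data and cumulative bytes written.

/*
 * Minimal sketch (hypothetical names, not HAMMER's actual structures or
 * API) of per-user accounting that tracks both live data and cumulative
 * bytes written, rejecting a write once either exceeds its limit.
 */
#include <sys/types.h>
#include <stdint.h>
#include <errno.h>

struct user_quota {
	uid_t		uq_uid;
	uint64_t	uq_live_bytes;		/* accumulated size of current files */
	uint64_t	uq_written_bytes;	/* total bytes ever written (history) */
	uint64_t	uq_live_limit;		/* classic quota on live data */
	uint64_t	uq_written_limit;	/* cap on historical churn */
};

/*
 * Called before committing a write of 'len' bytes for this user.
 * Returns 0 on success or EDQUOT if either limit would be exceeded.
 */
static int
quota_check_write(struct user_quota *uq, uint64_t len)
{
	if (uq->uq_live_bytes + len > uq->uq_live_limit)
		return (EDQUOT);
	if (uq->uq_written_bytes + len > uq->uq_written_limit)
		return (EDQUOT);
	/* Simplified: an in-place overwrite should not grow uq_live_bytes. */
	uq->uq_live_bytes += len;
	/* Never decreases; the historical records stay on disk. */
	uq->uq_written_bytes += len;
	return (0);
}

Note that such a check would only catch writes that go through the normal
write path; data dirtied via mmap would presumably have to be accounted
for where the dirty pages are flushed back to the filesystem, which is the
mmap problem mentioned above.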

Regards,

  Michael



