Linux Archive > Debian > Debian User
Old 05-12-2011, 10:36 AM
Juha Tuuna
 
Default Tracing Filesystem Accesses

On 12.5.2011 13:19, Rainer Dorsch wrote:
> Hello,
>
> I added an SSD in my system and moved the root filesystem to the SSD (which
> includes now also most of /home in my system). I spin down the regular hard
> disks and the system is a lot more quiet than before :-)
>
> Sometimes though something is accessing data on the disk drives, which I do
> not understand.
>
> Is there a way to trace all accesses to a directory tree (e.g. /mnt/disk) ?
>
> Is there another way to find out which data are accessed and if possible by
> which process?
>
> Thanks,
> Rainer

You could try installing auditd.
http://packages.debian.org/squeeze/auditd
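For reference, a minimal sketch of how auditd could be pointed at the tree (untested here; assumes the package is installed and you have root, and the `diskwatch` key name is just an example):

```shell
# Watch the directory tree for reads, writes and attribute changes,
# tagged with an arbitrary search key (root required):
auditctl -w /mnt/disk -p rwa -k diskwatch

# Later, search the audit log for matching events; -i resolves
# numeric fields, so the offending command names become readable:
ausearch -k diskwatch -i
```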

--
Juha Tuuna


--
To UNSUBSCRIBE, email to debian-user-REQUEST@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
Archive: http://lists.debian.org/4DCBB83C.6070609@iki.fi
 
Old 05-12-2011, 11:30 AM
Alex Mestiashvili
 
Default Tracing Filesystem Accesses

On 05/12/2011 12:19 PM, Rainer Dorsch wrote:

Hello,

I added an SSD in my system and moved the root filesystem to the SSD (which
includes now also most of /home in my system). I spin down the regular hard
disks and the system is a lot more quiet than before :-)

Sometimes though something is accessing data on the disk drives, which I do
not understand.

Is there a way to trace all accesses to a directory tree (e.g. /mnt/disk) ?


To be honest I've never used it, but the manual page looks promising:

inotify - inotify-tools

inotifywatch -v -e access -e modify -t 60 -r /mnt/disk
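As a complement (again a sketch, assuming inotify-tools is installed): inotifywatch only prints summary counts at the end, while inotifywait can stream individual events as they happen. Note that inotify reports which files were touched, not which process touched them:

```shell
# Stream access/modify events under the tree as they occur
# (-m = monitor forever, -r = recursive):
inotifywait -m -r -e access -e modify \
    --timefmt '%H:%M:%S' --format '%T %w%f %e' /mnt/disk
```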

Is there another way to find out which data are accessed and if possible by
which process?

Thanks,
Rainer



Maybe one of these tools:

iotop

lsof | grep /mnt/disk
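A couple of more targeted variants of the same idea (a sketch; both only catch files that are open at the moment you run them):

```shell
# List open files under the tree directly instead of grepping
# all of lsof's output (+D recurses, which can be slow on big trees):
lsof +D /mnt/disk

# Show PIDs with anything open on the mount point; -m treats the
# argument as a mounted filesystem, -v adds user/command columns:
fuser -vm /mnt/disk
```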


Regards,
Alex






 
Old 05-12-2011, 11:42 AM
Miles Fidelman
 
Default Tracing Filesystem Accesses

Rainer Dorsch wrote:

Is there a way to trace all accesses to a directory tree (e.g. /mnt/disk) ?

Is there another way to find out which data are accessed and if possible by
which process?



For files that are kept open by particular processes, you might play
with fuser and lsof (see the man pages).


You could try setting /proc/sys/vm/block_dump to 1, which will log
every disk access to syslog (see
http://sprocket.io/blog/2006/05/monitoring-filesystem-activity-under-linux-with-block_dump/),
though I expect auditd (as someone else suggested) would be less painful.
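Roughly like this (untested sketch, root required; the kernel logs via printk, so quiet syslogd first or the act of logging generates its own disk writes):

```shell
# Turn block-level dump logging on for a minute, then off again:
echo 1 > /proc/sys/vm/block_dump
sleep 60
echo 0 > /proc/sys/vm/block_dump

# The READ/WRITE/dirtied messages end up in the kernel ring buffer:
dmesg | grep -E 'READ block|WRITE block|dirtied' | tail -n 40
```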


I also seem to recall that there's something in the /proc filesystem
that provides a running list of file operations


Take a look at iwatch - that might be exactly what you want (I haven't
played with it myself) - see

http://prefetch.net/blog/index.php/2009/02/28/monitoring-file-activity-on-linux-hosts/

--
In theory, there is no difference between theory and practice.
In<fnord> practice, there is. .... Yogi Berra



 
Old 05-13-2011, 06:53 AM
Stan Hoeppner
 
Default Tracing Filesystem Accesses

On 5/12/2011 5:19 AM, Rainer Dorsch wrote:

Hello,

I added an SSD in my system and moved the root filesystem to the SSD (which
includes now also most of /home in my system). I spin down the regular hard
disks and the system is a lot more quiet than before :-)

Sometimes though something is accessing data on the disk drives, which I do
not understand.


Did you relocate swap to the SSD?

--
Stan


 
Old 05-13-2011, 07:38 AM
Doug
 
Default Tracing Filesystem Accesses

On 05/13/2011 02:53 AM, Stan Hoeppner wrote:

On 5/12/2011 5:19 AM, Rainer Dorsch wrote:

Hello,

I added an SSD in my system and moved the root filesystem to the SSD (which
includes now also most of /home in my system). I spin down the regular hard
disks and the system is a lot more quiet than before :-)

Sometimes though something is accessing data on the disk drives, which I do
not understand.


Did you relocate swap to the SSD?

According to some information on the various lists, you should *not* run
swap on an SSD, because the SSD has a limited number of read/write cycles,
and swap uses them up way too quickly. I guess you could use a second SSD
for swap and when it died, throw it out and replace it.

-doug

--
Blessed are the peacekeepers...for they shall be shot at from both sides. --A. M. Greeley


 
Old 05-13-2011, 11:50 AM
Stan Hoeppner
 
Default Tracing Filesystem Accesses

On 5/13/2011 2:38 AM, Doug wrote:

> According to some information on the various lists, you should *not* run
> swap on an SSD, because the SSD has a limited number of read/write cycles,
> and swap uses them up way too quickly.

That's pure FUD. Read the following soup to nuts:
http://www.storagesearch.com/ssdmyths-endurance.html

You've read *speculation*. There are hundreds of thousands of folks
around the globe using SSDs right now in their workstations for OS +
swap, and in high concurrent write load servers, mainly mail spools. A
busy mail spool has a higher localized write load than swap. In either
case I've yet to read of an SSD failing due to worn out cells.

I replaced a failed 4 year old Seagate Barracuda 120GB in my WinXP
workstation less than a month ago with a 32GB Corsair Nova SSD:
http://www.corsair.com/cssd-v32gb2-brkt.html

It was the cheapest ~30GB available at the time, $65 USD at Newegg, on
sale ($79 now). I partitioned 15GB for XP + apps + swap file, saving the
other 15GB, maybe for a Squeeze desktop install. Ping me in 5 years and
I'll let you know if this SSD has failed due to worn out cells.

--
Stan


 
Old 05-13-2011, 12:49 PM
Miles Fidelman
 
Default Tracing Filesystem Accesses

Stan Hoeppner wrote:

On 5/12/2011 5:19 AM, Rainer Dorsch wrote:

Hello,

I added an SSD in my system and moved the root filesystem to the SSD (which
includes now also most of /home in my system). I spin down the regular hard
disks and the system is a lot more quiet than before :-)

Sometimes though something is accessing data on the disk drives, which I do
not understand.


Did you relocate swap to the SSD?


What do you have under root vs. your hard drives? There's lots of stuff
going on all the time: network activity, mail spools, cron jobs, logging.



--
In theory, there is no difference between theory and practice.
In<fnord> practice, there is. .... Yogi Berra



 
Old 05-13-2011, 01:52 PM
Paul E Condon
 
Default Tracing Filesystem Accesses

On 20110513_065059, Stan Hoeppner wrote:
> On 5/13/2011 2:38 AM, Doug wrote:
>
> > According to some information on the various lists, you should *not* run
> > swap on an SSD, because the SSD has a limited number of read/write cycles,
> > and swap uses them up way too quickly.
>
> That's pure FUD. Read the following soup to nuts:
> http://www.storagesearch.com/ssdmyths-endurance.html
>
> You've read *speculation*. There are hundreds of thousands of folks
> around the globe using SSDs right now in their workstations for OS +
> swap, and in high concurrent write load servers, mainly mail spools. A
> busy mail spool has a higher localized write load than swap. In either
> case I've yet to read of an SSD failing due to worn out cells.
>
> I replaced a failed 4 year old Seagate Barracuda 120GB in my WinXP
> workstation less than a month ago with a 32GB Corsair Nova SSD:
> http://www.corsair.com/cssd-v32gb2-brkt.html
>
> It was the cheapest ~30GB available at the time, $65 USD at Newegg, on
> sale ($79 now). I partitioned 15GB for XP + apps + swap file, saving the
> other 15GB, maybe for a Squeeze desktop install. Ping me in 5 years and
> I'll let you know if this SSD has failed due to worn out cells.
...snip

Stan,

I'm sure there can be progress in any technology, but it is surely
true that there was, once upon a time, a re-write problem in the
underlying chip technology that goes into today's SSDs. I tend to use
cast-off older stuff in my home computing. When, in the past, would
you say that the SSD technology became reliable? It sort of puts a
cutoff on just how old I should put up with. Or did the technology
problems get solved before anything called SSD got offered on the
consumer market?

And the rewrite story for thumb drives (I think that is what the
small, fit-in-your-pocket USB devices are called): is that story also
FUD, or do they use a different, inferior technology?


TIA

--
Paul E Condon
pecondon@mesanetworks.net


 
Old 05-13-2011, 06:34 PM
Bob McConnell
 
Default Tracing Filesystem Accesses

Paul E Condon wrote:

On 20110513_065059, Stan Hoeppner wrote:

On 5/13/2011 2:38 AM, Doug wrote:


According to some information on the various lists, you should *not* run
swap on an SSD, because the SSD has a limited number of read/write cycles,
and swap uses them up way too quickly.

That's pure FUD. Read the following soup to nuts:
http://www.storagesearch.com/ssdmyths-endurance.html

You've read *speculation*. There are hundreds of thousands of folks
around the globe using SSDs right now in their workstations for OS +
swap, and in high concurrent write load servers, mainly mail spools. A
busy mail spool has a higher localized write load than swap. In either
case I've yet to read of an SSD failing due to worn out cells.

I replaced a failed 4 year old Seagate Barracuda 120GB in my WinXP
workstation less than a month ago with a 32GB Corsair Nova SSD:
http://www.corsair.com/cssd-v32gb2-brkt.html

It was the cheapest ~30GB available at the time, $65 USD at Newegg, on
sale ($79 now). I partitioned 15GB for XP + apps + swap file, saving the
other 15GB, maybe for a Squeeze desktop install. Ping me in 5 years and
I'll let you know if this SSD has failed due to worn out cells.

...snip

Stan,


I'm sure there can be progress in any technology, but it is surely
true that there was, once upon a time, a re-write problem in the
underlying chip technology that goes into today's SSDs. I tend to use
cast-off older stuff in my home computing. When, in the past, would
you say that the SSD technology became reliable? It sort of puts a
cutoff on just how old I should put up with. Or did the technology
problems get solved before anything called SSD got offered on the
consumer market?

And the rewrite story for thumb drives (I think that is what the
small, fit-in-your-pocket USB devices are called): is that story also
FUD, or do they use a different, inferior technology?


Before we go any further, let's get a couple of things sorted out. What
type of SSD (Solid State Drive) are you all talking about here?


If it contains Flash memory, then yes, there is a limit to the number of
ERASE cycles each sector can do. How long they last depends on a number
of factors, not the least of which is how old the chips are. The first
generations of flash memory chips could only be erased about 10,000
times before they started to fail. This could be mitigated by decent
firmware that did wear leveling behind the scenes. But there was still a
finite limit to how long they could be used before they wouldn't erase
anymore. Newer chips can handle 100,000-250,000 erase cycles, so decent
drivers can help them last for several years even under heavy use. If
the wear is spread out over a large space, the drive almost appears to
last forever. But I still wouldn't want to use them for files that were
frequently replaced or rewritten. I still think of them as Read-Mostly
memory components.
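The effect of spreading the wear can be sketched with rough numbers (illustrative only: the 100,000-cycle figure comes from the paragraph above, the device and hot-spot sizes are made up):

```shell
# Total data writable before cell exhaustion, with perfect wear
# leveling across the whole device vs. hammering a single 1 GB region:
capacity_gb=32
cycles=100000
whole_device_tb=$(( capacity_gb * cycles / 1000 ))   # spread over 32 GB
hot_spot_tb=$(( 1 * cycles / 1000 ))                 # confined to 1 GB
echo "$whole_device_tb $hot_spot_tb"                 # prints: 3200 100
```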


Bob McConnell
N2SPP


 
Old 05-13-2011, 08:08 PM
Jochen Schulz
 
Default Tracing Filesystem Accesses

Bob McConnell:
>
> Before we go any further, let's get a couple of things sorted out.
> What type of SSD (Solid State Drive) are you all talking about here?
>
> If it contains Flash memory,

What else do you have in mind?

> then yes, there is a limit to the
> number of ERASE cycles each sector can do. How long they last
> depends on a number of factors, not the least of which is how old
> the chips are. The first generations of flash memory chips could
> only be erased about 10,000 times before they started to fail.

Current generation (consumer-grade) MLC SSDs using 25nm technology use
flash that can only be rewritten 3000 times. Assuming perfect wear
levelling, that's still enough for most desktop applications:
120GB * 3000 = 360TB. That's still almost 100GB per day for ten years. Even
if write amplification quintuples the amount of data written, that's
still 20GB per day. My systems don't write that much.
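The arithmetic above checks out (integer GB, ignoring GB/GiB rounding):

```shell
capacity_gb=120
cycles=3000
total_gb=$(( capacity_gb * cycles ))        # total writable: 360000 GB = 360 TB
per_day_gb=$(( total_gb / (10 * 365) ))     # spread over ten years
amplified_gb=$(( per_day_gb / 5 ))          # with 5x write amplification
echo "$total_gb $per_day_gb $amplified_gb"  # prints: 360000 98 19
```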

J.
--
If I was a supermodel I would give all my cocaine to the socially
excluded.
[Agree] [Disagree]
<http://www.slowlydownward.com/NODATA/data_enter2.html>
 