Linux Archive > Gentoo > Gentoo Embedded

 
 
 
Old 03-30-2010, 02:28 PM
Arkadi Shishlov
 
Default file system question

On 03/30/10 16:28, Karl Hiramoto wrote:
> On 03/30/2010 12:42 AM, David Relson wrote:
>> I'm porting the software for an embedded medical device from DOS to
>> Linux and am wondering which file systems are appropriate and which are
>> not. The device's mass storage is a Disk-on-Module solid state flash
>
> 1. Mount read-only a partition that contains the main system, to ensure
> you can always boot and to avoid damage to the system files.

Depending on how you want to service (or not) your device root fs, consider
using SquashFS for that.
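A minimal sketch of that approach (paths and mount point are illustrative): build a compressed, read-only image of the finished root fs at release time, so the watchdog can never corrupt it, and service the device by swapping the whole image rather than patching files in place.

```
# Build a compressed, read-only image of the finished root fs:
mksquashfs /build/rootfs rootfs.sqsh

# The image is then mounted read-only (for a root fs this is
# typically done from an initramfs):
mount -t squashfs -o ro,loop rootfs.sqsh /newroot
```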
 
Old 03-30-2010, 02:36 PM
Ed W
 
Default file system question

On 29/03/2010 23:42, David Relson wrote:

G'day,

I'm porting the software for an embedded medical device from DOS to
Linux and am wondering which file systems are appropriate and which are
not. The device's mass storage is a Disk-on-Module solid state flash
drive. Data is presently written at approx 100 bytes every 30 seconds
but that might change to 100 bytes every second. The device has a
watchdog (recently activated) and during today's session it was
triggered and wiped out my file system.

Anybody have recommendations on which file system to use and the
appropriate settings?

Anybody have suggested readings so I can educate myself?



In addition to what everyone else has already noted, you really need to
state your tradeoff between the lifetime of the flash and minimising the
risk of losing data. Fundamentally you can buffer the data and write less
frequently to the device (your device has a finite number of writes before
it perishes), or you can write more frequently and hence risk less data
loss.


Any journaled fs is *supposed* to be reasonably robust against disk
crashes during writes, but often there are layered levels of safety, e.g.
ext3 has three write modes (data=writeback, data=ordered, data=journal)
depending on how you want to trade safety for speed (hint: the newest
defaults are probably not optimal if your data is very precious)
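For concreteness, an /etc/fstab sketch (the device name and mount point are illustrative) showing where those modes are selected:

```
# ext3's three data modes, fastest/least safe first:
#   data=writeback - only metadata is journaled, data ordering not guaranteed
#   data=ordered   - data written before metadata commits (the traditional default)
#   data=journal   - data itself goes through the journal (safest, slowest)
/dev/sda2  /data  ext3  data=journal,sync,barrier=1  0  2
```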


I think one of the issues you will face is that whilst a journaling fs
with appropriate options should be very safe against you flipping the
power off whenever you feel like it, long term your failure mode with
flash appears to be random files disappearing or getting corrupted - the
wear levelling seems to mean that unrelated files can get trashed as the
flash shuffles data around to even out wear, until the card basically
gives up. I haven't experienced this myself yet...


I guess just plan ahead for this.

Good luck

Ed W
 
Old 03-30-2010, 03:07 PM
Karl Hiramoto
 
Default file system question

On 03/30/2010 04:14 PM, Arkadi Shishlov wrote:

On 03/30/10 16:28, Karl Hiramoto wrote:


I've used ext3 and reiserfs on CompactFlash but neither seems to be 100%
error-proof. I've wanted to try NILFS2 but haven't done it yet.


Let us know then, because something must be done about nilfs2_cleanerd,
which likes to write a lot.


Sounds like a bad idea on flash then.
 
Old 03-30-2010, 04:29 PM
Arkadi Shishlov
 
Default file system question

On 03/30/10 18:07, Karl Hiramoto wrote:
> On 03/30/2010 04:14 PM, Arkadi Shishlov wrote:
>> On 03/30/10 16:28, Karl Hiramoto wrote:
>>
>>> I've used ext3 and reiserfs on CompactFlash but neither seems to be 100%
>>> error-proof. I've wanted to try NILFS2 but haven't done it yet.
>>>
>> Let us know then, because something must be done about nilfs2_cleanerd,
>> which likes
>> to write a lot.
>>
>>
> Sounds like a bad idea on flash then.

It probably depends a lot on the workload, plus on how often cleanerd is
allowed to run, or whether it is enabled at all.
It might actually be better for flash, because it is not hammering multiple
blocks with journal and metadata (sync) writes, or the double writes of
data=journal. Working in a sequential manner is presumably advantageous for
a (cheap) FTL too.
Just a guess... Someone needs to try it out in a real product.

The built-in checkpoint/snapshot feature is nice to have.
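For anyone who does try it, the cleaner can be throttled from its config file. A sketch of /etc/nilfs_cleanerd.conf (the values here are illustrative, not recommendations; check the nilfs_cleanerd.conf man page for your version):

```
# Throttle the garbage collector so it writes rarely and keeps
# recent checkpoints around longer:
protection_period   86400   # don't reclaim segments newer than a day
cleaning_interval   3600    # wake the cleaner hourly, not every few seconds
nsegments_per_clean 2       # reclaim only a little per pass
```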
 
Old 03-30-2010, 06:11 PM
wireless
 
Default file system question

Ed W wrote:

> In addition to what everyone else has already noted you really need to
> state your tradeoff between the lifetime of the flash and minimising
> the risk of losing data.


Wow, this sounds like a PC programmer's statement and not
how an embedded designer would solve the problem. Granted,
the poster has not provided an adequate 'specification', so
my point of view may or may not be precisely applicable.


From what I have gleaned, the developer should first look at
a low power system with battery backup. If it is an
embedded system designed to run as such, a simple 9V battery
should provide years/decades of reliability, to 5 nines of
uptime or more..... This robust research and design approach
will solve power-cycling data loss problems....


Depending on the details of the data collection sensors and
hardware, either redesign them as low power, or put them on a
separate power supply. If that (relatively) high power supply
is intermittent, you only lose data for those data
collection intervals. Your CPU/storage module never loses
power. Some very fast A/D devices are power hogs. Other
(slower) A/D devices are very power efficient, especially if
they 'sleep' (low or off power mode) when not
actively needed.



I could go on and on, but just purchasing an SBC and trying
to turn it into a robust product is rarely successful in my
experience. Use an off-the-shelf SBC to get your coders
coding, and get a hardware designer to look at the
specification and produce a low power, minimized design that
has what you need and nothing else.

Then when you ramp up quantity you'll actually make a
profit: as your competition lowers their price, you can still
compete. It's impossible, in my experience, to use an off-the-shelf
SBC and be competitive for very long. Your competition
will undercut you every time....


hth,
James
 
Old 03-31-2010, 12:44 AM
David Relson
 
Default file system question

Hello, All!

The many suggestions have been very helpful. This afternoon
'sync' was added to fstab. No problems have been seen since.
While not conclusive, the indication is good and we're hopeful.

The device in question is a medical device that continuously takes
readings, graphs them, and every 30 seconds appends a 34 byte data
record to each of two files. It's not exactly disk intensive :->

FWIW, often the files hit by the "Stale NFS file handle" problem are
symlinks in /usr/lib. Since _nobody_ writes that directory it's odd
that the problem often shows up there. That's not the only place, but
seems to be the most common one.

The suggestion of separate partitions for program and data is one I
like and shall implement. Thanks for the idea!

The power cord isn't a problem -- there's an onboard battery. However
the power button can be pressed at any time. 'Tis something that we'll
live with. Adding fsync() calls, in addition to 'sync' mounting, should
help.

Ciao!

David


On Tue, 30 Mar 2010 15:28:59 +0200
Karl Hiramoto wrote:

> On 03/30/2010 12:42 AM, David Relson wrote:
> > G'day,
> >
> > I'm porting the software for an embedded medical device from DOS to
> > Linux and am wondering which file systems are appropriate and which
> > are not. The device's mass storage is a Disk-on-Module solid state
> > flash drive. Data is presently written at approx 100 bytes every
> > 30 seconds but that might change to 100 bytes every second. The
> > device has a watchdog (recently activated) and during today's
> > session it was triggered and wiped out my file system.
> >
> It sounds like some kind of data logging application. If this is
> the case, what I would do is:
>
> 1. Mount read-only a partition that contains the main system, to
> ensure you can always boot and to avoid damage to the system files.
>
> 2. Mount read/write the partition that data gets written to. On boot
> you can detect errors in this partition and reformat it if it is
> totally hosed.
>
> 3. In your application call fsync() on the file and perhaps the
> directory after the writes to make sure the data gets to the flash.
> If you are in fact logging, rotate your log files so that after they
> reach size X you write to a new file. The files most likely to be
> damaged are ones that are open and being written to the moment the
> power cord gets yanked.
>
> As Manuel said, using the 'sync' option may help. Also see "man
> mount" and /usr/src/linux/Documentation/filesystems/ext3.txt
>
> Other options that improve reliability are data=journal and barrier=1.
>
>
> I've used ext3 and reiserfs on CompactFlash but neither seems to be
> 100% error-proof. I've wanted to try NILFS2 but haven't done it yet.
>
> --
> Karl
>
>
>
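Karl's point 3 above (fsync the file, and perhaps the directory, after each write; rotate once files reach size X) can be sketched as follows. This is a hedged illustration, not the poster's actual code; the function names and the 64 KB rotation threshold are made up for the example.

```python
import os

def append_record_durably(path, data):
    """Append one record and push it to stable storage: fsync() the file
    for the data, then fsync() the containing directory so the directory
    entry itself survives a power cut (this matters most for files that
    were just created)."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)               # data + inode to the flash
    finally:
        os.close(fd)
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)              # the dirent itself
    finally:
        os.close(dfd)

def rotate_if_large(path, max_bytes=64 * 1024):
    """Log rotation as suggested: once PATH exceeds max_bytes, move it
    aside so only the small active file is open when power drops."""
    if os.path.exists(path) and os.path.getsize(path) >= max_bytes:
        os.replace(path, path + ".1")
```

Calling `rotate_if_large()` before each `append_record_durably()` keeps the window of vulnerable, open-for-write data bounded by the rotation threshold.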
 
Old 03-31-2010, 01:05 AM
Peter Stuge
 
Default file system question

wireless wrote:
> design a low power, minimized design

Sure, except that's not economical below a certain threshold, say 1k
units. Maybe even 10k.

There are lots of great embedded products without a mass market, but
they're still important. Making custom electronics is just too expensive,
and it also requires a large range of specialist knowledge that is not
so easy to whip out of the sleeve.

I don't contest that a custom design will be the best fit in the second
or third hardware generation, but that is quite often traded away for
other benefits - and the products still have to work.

Backup power is fairly easy to accomplish in any case, more a question
of money; but since there's a watchdog, the system needs to handle
unclean shutdown in any case.


//Peter
 
Old 03-31-2010, 06:33 AM
Arkadi Shishlov
 
Default file system question

On 03/31/10 03:44, David Relson wrote:
> The many suggestions have been very helpful. This afternoon
> 'sync' was added to fstab. No problems have been seen since.
> While not conclusive, the indication is good and we're hopeful.
>
> The device in question is a medical device that continuously takes
> readings, graphs them, and every 30 seconds appends a 34 byte data
> record to each of two files. It's not exactly disk intensive :->

Even though the data amount is low, you're still looking at something like
~3 x 16KB eraseblocks (and eraseblocks could be significantly larger) worth
of erases per update, because some data has to be rewritten in place. I
would write into one file, with writes aligned to the flash page size.
If someone hits the power button while the flash is working, sync won't
help on ext2. Since backup power is present, why not wire the power button
so that shutdown can be signaled and an emergency sync() performed on the
filesystem...
While ext3 is mature, ext4 has journal checksums.
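The back-of-envelope arithmetic behind that eraseblock concern, with the thread's numbers (34-byte record every 30 seconds, two files) plus an assumed 16 KiB eraseblock and an assumed three blocks touched per synced append; real Disk-on-Module geometry will differ:

```python
# Rough erase-count estimate for the logger described in this thread.
ERASEBLOCK = 16 * 1024   # bytes; assumed, real DoM blocks may be far larger
RECORD = 34              # bytes appended per interval (from the thread)
INTERVAL = 30            # seconds between appends (from the thread)
BLOCKS_TOUCHED = 3       # assumed: 2 data files + a metadata block

appends_per_day = 24 * 3600 // INTERVAL             # 2880 appends/day
# With 'sync' mounts each tiny append rewrites whole blocks in place,
# so a naive FTL performs roughly this many erases:
erases_per_day = appends_per_day * BLOCKS_TOUCHED   # 8640 erases/day
# Writing one file with page-aligned, batched records amortizes that:
records_per_block = ERASEBLOCK // RECORD            # 481 records per block
print(appends_per_day, erases_per_day, records_per_block)
```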

> FWIW, often the files hit by the "Stale NFS file handle" problem are
> symlinks in /usr/lib. Since _nobody_ writes that directory it's odd
> that the problem often shows up there. That's not the only place, but
> seems to be the most common one.

Weird errors may indicate toolchain problems, e.g. errno.h not matching
the kernel version.

> The power cord isn't a problem -- there's an onboard battery. However
> the power button can be pressed at any time. 'Tis something that we'll
> live with. Adding fsync() calls, in addition to 'sync' mounting, should
> help.

I believe they amount to the same thing, so you need only one of the two
methods.
 
Old 03-31-2010, 09:26 AM
Nebojša Ćosić
 
Default file system question

> G'day,
>
> I'm porting the software for an embedded medical device from DOS to
> Linux and am wondering which file systems are appropriate and which are
> not. The device's mass storage is a Disk-on-Module solid state flash
> drive. Data is presently written at approx 100 bytes every 30 seconds
> but that might change to 100 bytes every second. The device has a
> watchdog (recently activated) and during today's session it was
> triggered and wiped out my file system.
>
> Anybody have recommendations on which file system to use and the
> appropriate settings?
>
> Anybody have suggested readings so I can educate myself?
>
> Thank you.
>
> David
>
After having problems with EMC and USB storage, I finally fixed the
problem with the following solution:
- the data storage, in my case a USB stick, has at least 2 partitions
- the second partition has no file system. It is divided into a number of
slots, each large enough to store all of my data
- all work is performed on data stored on a RAM disk
- periodically (triggered by time and/or data change), I compress the RAM
disk and dump it into the next slot on the unformatted partition
I have a small battery, which I use to do one final dump at shutdown
time.
On startup, I go through all of the slots in the second partition,
searching for the latest uncorrupted data, and use this to populate the
RAM disk.
If you can live with some data loss, you don't even need battery backup.
No matter what the wear-leveling implementation on your storage is, this
solution works optimally.
It works even on directly accessible flash storage.
Since there is no real file system on the partition, there is no need for
recovery - besides searching for the latest and greatest set of data on
startup.
And it is implemented as an ash script using tar and gzip, so your data
is actually better verified than on a normal file system (the usual ones
do not actually checksum data; I don't consider jffs2 to be "the usual
filesystem").
Nebojša
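The slot scheme above can be sketched as follows. Nebojša's actual implementation is an ash script with tar and gzip; this is a hypothetical Python rendering of the same idea, with an assumed slot size and an assumed per-slot header (magic + sequence number) so restore can pick the newest dump. The gzip CRC is what detects a torn or corrupt slot, exactly as in the original.

```python
import gzip
import io
import os
import struct
import tarfile

SLOT_SIZE = 64 * 1024   # assumed: each slot must hold the largest dump
NSLOTS = 8              # assumed slot count
MAGIC = b"SLOT"         # assumed 12-byte header: magic, seq, length

def dump_slot(dev, ramdisk, slot, seq):
    """gzip a tar of RAMDISK and write it at SLOT's offset on the raw
    partition DEV, prefixed with a sequence number for restore."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tf:
        tf.add(ramdisk, arcname=".")
    blob = buf.getvalue()
    assert len(blob) + 12 <= SLOT_SIZE, "dump too large for slot"
    with open(dev, "r+b") as f:
        f.seek(slot * SLOT_SIZE)
        f.write(MAGIC + struct.pack("<II", seq, len(blob)) + blob)
        f.flush()
        os.fsync(f.fileno())

def restore_latest(dev, ramdisk):
    """Scan every slot, keep only candidates whose gzip payload verifies
    (the CRC catches torn writes), and extract the highest sequence."""
    best = None
    with open(dev, "rb") as f:
        for slot in range(NSLOTS):
            f.seek(slot * SLOT_SIZE)
            hdr = f.read(12)
            if len(hdr) < 12 or hdr[:4] != MAGIC:
                continue
            seq, size = struct.unpack("<II", hdr[4:12])
            if size > SLOT_SIZE - 12:
                continue            # garbage length field
            blob = f.read(size)
            try:
                gzip.decompress(blob)       # integrity check only
            except (OSError, EOFError):
                continue                    # corrupt slot, skip it
            if best is None or seq > best[0]:
                best = (seq, blob)
    if best is None:
        return None
    with tarfile.open(fileobj=io.BytesIO(best[1]), mode="r:gz") as tf:
        tf.extractall(ramdisk)
    return best[0]
```

Writing slots round-robin means a dump interrupted by power loss only destroys the oldest copy, never the newest good one.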
 
Old 03-31-2010, 11:30 AM
David Relson
 
Default file system question

On Wed, 31 Mar 2010 11:26:49 +0200
Nebojša Ćosić wrote:

> > G'day,
> >
> > I'm porting the software for an embedded medical device from DOS to
> > Linux and am wondering which file systems are appropriate and which
> > are not. The device's mass storage is a Disk-on-Module solid state
> > flash drive. Data is presently written at approx 100 bytes every
> > 30 seconds but that might change to 100 bytes every second. The
> > device has a watchdog (recently activated) and during today's
> > session it was triggered and wiped out my file system.
> >
> > Anybody have recommendations on which file system to use and the
> > appropriate settings?
> >
> > Anybody have suggested readings so I can educate myself?
> >
> > Thank you.
> >
> > David
> >
> After having problems with EMC and USB storage, I finally fixed the
> problem with the following solution:
> - the data storage, in my case a USB stick, has at least 2 partitions
> - the second partition has no file system. It is divided into a number
> of slots, each large enough to store all of my data
> - all work is performed on data stored on a RAM disk
> - periodically (triggered by time and/or data change), I compress the
> RAM disk and dump it into the next slot on the unformatted partition
> I have a small battery, which I use to do one final dump at shutdown
> time.
> On startup, I go through all of the slots in the second partition,
> searching for the latest uncorrupted data, and use this to populate
> the RAM disk. If you can live with some data loss, you don't even need
> battery backup. No matter what the wear-leveling implementation on
> your storage is, this solution works optimally.
> It works even on directly accessible flash storage.
> Since there is no real file system on the partition, there is no need
> for recovery - besides searching for the latest and greatest set of
> data on startup.
> And it is implemented as an ash script using tar and gzip, so your
> data is actually better verified than on a normal file system (the
> usual ones do not actually checksum data; I don't consider jffs2 to be
> "the usual filesystem").
> Nebojša

Wow! That's a robust solution!
 
