 
Old 10-18-2008, 12:43 PM
Robert Nichols
 
Default ls and rm: "argument list too long"

Les Mikesell wrote:
> Robert Nichols wrote:
>>> These shouldn't make any difference.  The limit is on the size of the
>>> expanded shell command line.
>>
>> Really?
>>
>> $ M=0; N=0; for W in `find /usr -xdev 2>/dev/null`; do M=$(($M+1));
>> N=$(($N+${#W}+1)); done; echo $M $N
>> 156304 7677373
>>
>> vs.
>>
>> $ /bin/echo `find /usr -xdev 2>/dev/null`
>> bash: /bin/echo: Argument list too long
>>
>> For the first case, the shell never tries to pass the list as command
>> arguments.  It builds the list internally, limited only by memory size,
>> and processes the words one by one.
>
> Is that peculiar to bash?  I thought the `command` construct was
> expanded by shells into the command line before being evaluated.


I can't answer for how any particular shell allocates its internal memory,
but yes, the shell does read the entire output from `command` before
evaluating it. If this data is simply being used internally it never
gets passed to the kernel as an argument to exec() and thus can never
result in errno==E2BIG (7, "Argument list too long").
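
(For reference, a small sketch of the same "processed internally, one word
at a time" idea in a whitespace-safe form; it assumes GNU find and bash,
since -print0 and read -d '' are extensions rather than POSIX:)

    # Stream names from find one at a time instead of expanding them into
    # a single argument list.  The shell never hands the whole list to
    # exec(), so "Argument list too long" (E2BIG) cannot occur, and
    # -print0 with read -d '' keeps names containing spaces intact.
    find /usr -xdev -print0 2>/dev/null |
    while IFS= read -r -d '' name; do
        printf '%s\n' "$name"      # process one name per iteration
    done | wc -l                   # e.g. count how many were handled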

--
Bob Nichols "NOSPAM" is really part of my email address.
Do NOT delete it.

 
Old 10-18-2008, 01:18 PM
Kevin Krieser
 
Default ls and rm: "argument list too long"

On Oct 17, 2008, at 7:58 PM, thad wrote:

> Satchel Paige - "Don't look back. Something might be gaining on you."
>
> On Fri, Oct 17, 2008 at 4:36 AM, Laurent Wandrebeck
> <l.wandrebeck@gmail.com> wrote:
>> 2008/10/17 Jussi Hirvi <greenspot@greenspot.fi>:
>>> Since when is there a limit in how long directory listings CentOS can
>>> show (ls), or how large directories can be removed (rm). It is really
>>> annoying to say, for example
>>>
>>> rm -rf /var/amavis/tmp
>>>
>>> and get only "argument list too long" as feedback.
>>>
>>> Is there a way to go round this problem?
>>>
>>> I have CentOS 5.2.
>>>
>>> - Jussi
>>
>> try something like:
>> for i in /var/amavis/tmp/*
>> do
>>     rm -rf $i
>> done
>
> it should be:
>
> for i in `ls /var/amavis/tmp`
> do
>     rm $i
> done

Taking into account the valid objections others have mentioned, such as
problems with embedded whitespace in names, rm -rf $i and rm $i above are
not the same. Even if there are no directories under /var/amavis/tmp/,
then depending on aliases, etc., rm $i may prompt you for confirmation;
the other will go ahead and do the remove if you have permission to do it
(or at least the -f will).

The -r is unnecessary for plain files, and offends me when I see people
do it, but it doesn't really cause any harm.

I personally either rm -rf the directory and recreate it if necessary, or
do a find /var/amavis/tmp -type f ... because of experience over the
years with command lines that are too long. Unixes in the past had even
smaller limits. Most frequently I use xargs, and if things fail, I may
just do -exec rm -f {} \; on the find.
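
(For concreteness, a sketch of the two find-based forms just mentioned,
using the directory from this thread; note that a plain "find | xargs"
pipeline splits on whitespace, which is fine for amavisd-new temp names
but not safe in general:)

    # Batch the names through xargs, which packs them into as many rm
    # invocations as fit under the kernel's argument-size limit:
    find /var/amavis/tmp -type f | xargs rm -f

    # Or let find run rm itself, one file per invocation; slower, but it
    # works with any POSIX find and never builds a long argument list:
    find /var/amavis/tmp -type f -exec rm -f {} \;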

 
Old 10-19-2008, 01:13 AM
mouss
 
Default ls and rm: "argument list too long"

Jussi Hirvi a écrit :
> Since when is there a limit in how long directory listings CentOS can show
> (ls), or how large directories can be removed (rm). It is really annoying to
> say, for example
>
> rm -rf /var/amavis/tmp
>
> and get only "argument list too long" as feedback.


I doubt this. "Argument list too long" refers to the argument list the
shell builds for a command, and in the command you show the shell passes
only a single argument.

I guess you want to remove amavisd-new temp files and you did

rm -rf /var/amavis/tmp/*

In this case, the shell has to replace the pattern with

rm -rf /var/amavis/tmp/foo1 /var/amavis/tmp/foo2 ....

so it needs to allocate enough memory to hold all of these arguments
before passing them to the rm command. A limit is therefore necessary to
avoid consuming all your memory, and it exists on every unix system I
have seen.


> Is there a way to go round this problem?

Since amavisd-new temp files have no spaces in their names, you can do

for f in /var/amavis/tmp/*; do rm -rf $f; done

(Here the shell does the loop itself, so it doesn't need to pass the
whole list to a single command.)

Alternatively, you could remove the whole directory (rm -rf
/var/amavis/tmp) and recreate it (don't forget to reset the owner and
permissions).



> I have CentOS 5.2.
>
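
(A sketch of that remove-and-recreate approach; the owner, group, and
mode shown are only assumptions about a typical amavisd-new setup, so
check what your installation actually uses before copying them:)

    rm -rf /var/amavis/tmp
    mkdir /var/amavis/tmp
    chown amavis:amavis /var/amavis/tmp   # assumed owner:group, verify locally
    chmod 750 /var/amavis/tmp             # assumed mode, verify locally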
 
Old 10-19-2008, 01:28 AM
Kevin Krieser
 
Default ls and rm: "argument list too long"

On Oct 18, 2008, at 8:13 PM, mouss wrote:

> Jussi Hirvi a écrit :
>> Since when is there a limit in how long directory listings CentOS can
>> show (ls), or how large directories can be removed (rm). [...]
>
> Since amavisd-new temp files have no spaces in their names, you can do
>
> for f in /var/amavis/tmp/*; do rm -rf $f; done
>
> (Here the shell does the loop itself, so it doesn't need to pass the
> whole list to a single command.)
>
> Alternatively, you could remove the whole directory (rm -rf
> /var/amavis/tmp) and recreate it (don't forget to reset the owner and
> permissions).

It's possible to learn something new every day. I would have expected the
for loop to fail too, thinking it would attempt to expand the wildcard
before starting its iteration.

 
Old 10-19-2008, 02:08 AM
Damian S
 
Default ls and rm: "argument list too long"

> and get only "argument list too long" as feedback.
>
> Is there a way to go round this problem?
>
> I have CentOS 5.2.
>
I'm not going to repeat the good advice others have already given you on
how to avoid this error, but will instead point out that it is related to
the ARG_MAX limit.
The standard limit for Linux kernels up to 2.6.22.x is 131072 bytes.
This can be confirmed by typing:

getconf ARG_MAX

Until CentOS uses the 2.6.23 kernel (or later), in which the length of
the arguments is constrained only by system resources, you'll need to use
scripting techniques that are more parsimonious.
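
(A small sketch of reading that value and using it to keep batches under
the limit; the halved value passed to -s is just an arbitrary illustrative
cap, since the default size xargs picks is already safe:)

    limit=$(getconf ARG_MAX)
    echo "ARG_MAX is $limit bytes"
    # Ask xargs to keep each rm command line well below the limit;
    # -s here is only illustrative.
    find /var/amavis/tmp -type f | xargs -s $((limit / 2)) rm -f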


 
Old 10-24-2008, 11:49 AM
Johnny Hughes
 
Default ls and rm: "argument list too long"

Jussi Hirvi wrote:

>> piping ls to xargs should do the trick. man xargs for details.
>
> Ok, thanks for ideas, Laurent and Lawrence.
>
> A strange limitation in ls and rm, though. My friend said he hasn't seen
> that in Fedora.

This issue is in Fedora, Ubuntu, CentOS, RHEL, (put any other linux
version you want here).

When you get too many files in a directory, you will receive this error.
The same SOURCE code is compiled regardless of the "Distro".

As you have seen, there are many solutions to this problem ... HOWEVER,
picking a new distro is not one of them.

Most people never hit this limitation, but it is certainly possible and
present in all versions of Linux.

Thanks,
Johnny Hughes

 
Old 10-24-2008, 03:48 PM
fred smith
 
Default ls and rm: "argument list too long"

On Fri, Oct 24, 2008 at 06:49:02AM -0500, Johnny Hughes wrote:
> Jussi Hirvi wrote:
>
> >> piping ls to xargs should do the trick. man xargs for details.
> >
> > Ok, thanks for ideas, Laurent and Lawrence.
> >
> > A strange limitation in ls and rm, though. My friend said he hasn't seen
> > that in Fedora.
>
> This issue is in Fedora, Ubuntu, CentOS, RHEL, (put any other linux
> version you want here).
>
> When you get too many files in a directory, you will receive this error.
> The same SOURCE code is compiled regardless of the "Distro".
>
> As you have seen, there are many solutions to this problem ... HOWEVER,
> picking a new distro is not one of them.
>
> Most people never hit this limitation, but it is certainly possible and
> present in all versions of Linux.
>
> Thanks,
> Johnny Hughes
>

I've always understood it to be an issue with command-line length:
somewhere (probably in bash) there's a limit on how big a buffer can be
used for storing the command line.



--
---- Fred Smith -- fredex@fcshome.stoneham.ma.us -----------------------------
I can do all things through Christ
who strengthens me.
------------------------------ Philippians 4:13 -------------------------------
 
Old 10-24-2008, 04:15 PM
"Bart Schaefer"
 
Default ls and rm: "argument list too long"

On Fri, Oct 24, 2008 at 8:48 AM, fred smith
<fredex@fcshome.stoneham.ma.us> wrote:
> I've always understood it to be an issue with command-line length:
> somewhere (probably in bash) there's a limit on how big a buffer can be
> used for storing the command line.

There are two possible buffer limits one could encounter: tty driver
input line buffer (which is not an issue for bash because readline
avoids it) and kernel exec space for the arguments plus environment
passed to a new process. Only the second one causes the error message
that started this thread, and previous posts have pointed out that
recent Linux kernels have effectively removed that limit (see message
from Jeremy Sanders).
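
(A rough illustration of that second limit; whether the first command
actually fails depends on the kernel and stack limits involved, so treat
the numbers as arbitrary rather than as a calibrated test:)

    # One exec() with roughly 1.3 MB of arguments; on a kernel with the
    # classic fixed 128 KiB limit this fails with "Argument list too long":
    /bin/true $(seq 1 200000)

    # The same data fed through xargs, which splits it across as many
    # /bin/true invocations as needed to stay under the limit:
    seq 1 200000 | xargs /bin/true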
 
Old 10-24-2008, 04:31 PM
Bill Campbell
 
Default ls and rm: "argument list too long"

On Fri, Oct 24, 2008, Bart Schaefer wrote:
>On Fri, Oct 24, 2008 at 8:48 AM, fred smith
><fredex@fcshome.stoneham.ma.us> wrote:
>> I've always understood it to be an issue with command-line length:
>> somewhere (probably in bash) there's a limit on how big a buffer can be
>> used for storing the command line.
>
>There are two possible buffer limits one could encounter: tty driver
>input line buffer (which is not an issue for bash because readline
>avoids it) and kernel exec space for the arguments plus environment
>passed to a new process. Only the second one causes the error message
>that started this thread, and previous posts have pointed out that
>recent Linux kernels have effectively removed that limit (see message
>from Jeremy Sanders).

While current Linux kernels may have removed the limit, this has
been a common issue on all *nix systems for decades, which is why
xargs was written.

As a general rule, it's best to use find to pipe lists to xargs rather
than depend on the characteristics of the underlying system. This might
be called defensive programming, as it ensures that scripts will work
anywhere, not just on the system you are using today.
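
(A sketch of that defensive pairing; -print0 and -0 are GNU/BSD
extensions rather than strict POSIX, so this trades a little portability
for safety with odd file names:)

    # find emits NUL-terminated names and xargs reassembles them into rm
    # command lines sized to fit, so neither long lists nor whitespace in
    # names cause trouble:
    find /var/amavis/tmp -type f -print0 | xargs -0 rm -f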

Programming to the lowest common denominator may not feel sexy, but it
can prevent many headaches in the future. I spent quite a bit of time
many years ago getting a large FORTRAN system working that had been
written on a system that used 7-character variable names where standard
FORTRAN only permitted 6 (it was amazing how many of the variable names
differed only in the 7th character). While this would be relatively easy
to deal with today, it was a bitch when all programs were on 80-column
punch cards.

Bill
--
INTERNET: bill@celestial.com Bill Campbell; Celestial Software LLC
URL: http://www.celestial.com/ PO Box 820; 6641 E. Mercer Way
Voice: (206) 236-1676 Mercer Island, WA 98040-0820
Fax: (206) 232-9186

We shouldn't elect a President; we should elect a magician.
Will Rogers
 
Old 10-24-2008, 04:45 PM
Les Mikesell
 
Default ls and rm: "argument list too long"

Bill Campbell wrote:
>> There are two possible buffer limits one could encounter: tty driver
>> input line buffer (which is not an issue for bash because readline
>> avoids it) and kernel exec space for the arguments plus environment
>> passed to a new process. Only the second one causes the error message
>> that started this thread, and previous posts have pointed out that
>> recent Linux kernels have effectively removed that limit (see message
>> from Jeremy Sanders).
>
> While current Linux kernels may have removed the limit,

It's probably a mistake to say that the limit is removed. I think this
change just moves the limiting factor elsewhere - to the RAM or virtual
memory that happens to be available.

> this has
> been a common issue on all *nix systems for decades, which is why
> xargs was written.

Recognizing that you do not have infinite buffer space available is a
good thing. Keep using xargs.


--
Les Mikesell
lesmikesell@gmail.com

 
