Old 10-16-2008, 04:48 PM
"Mag Gam"
 
strict memory

Hello All:

Running CentOS 5.2 at our university. We have several students' processes
that take up too much memory. Our system has 64G of RAM, and some
processes take close to 32-48G of RAM. This is causing many problems for
others. I was wondering if there is a way to restrict memory usage per
process? If a process goes over 32G, simply kill it. Any thoughts or
ideas?

TIA
_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
 
Old 10-16-2008, 04:56 PM
"Filipe Brandenburger"
 
strict memory

Hi,

On Thu, Oct 16, 2008 at 12:48, Mag Gam <magawake@gmail.com> wrote:
> I was wondering if there is a way to restrict memory usage
> per process? If a process goes over 32G, simply kill it.

You can limit the amount of virtual memory of a process with "ulimit
-v". See "help ulimit" or "man bash" for more details.

HTH,
Filipe
 
Old 10-16-2008, 04:59 PM
"Mag Gam"
 
strict memory

Yes, thanks. I was thinking of that too. Any other suggestions?

TIA


On Thu, Oct 16, 2008 at 12:56 PM, Filipe Brandenburger
<filbranden@gmail.com> wrote:
> You can limit the amount of virtual memory of a process with "ulimit
> -v". See "help ulimit" or "man bash" for more details.
 
Old 10-16-2008, 04:59 PM
Joshua Baker-LePain
 
strict memory

On Thu, 16 Oct 2008 at 12:48pm, Mag Gam wrote:

> I was wondering if there is a way to restrict memory usage
> per process? If a process goes over 32G, simply kill it.

Have a look at /etc/security/limits.conf.
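
For example, a line like this (a hedged sketch: "@students" is a
hypothetical group name, and 33554432 is 32G expressed in kilobytes,
since the "as" item sets the address-space limit in KB):

    # /etc/security/limits.conf
    # <domain>    <type>    <item>    <value>
    @students     hard      as        33554432

pam_limits applies this at login, so it only affects sessions started
after the change.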

--
Joshua Baker-LePain
QB3 Shared Cluster Sysadmin
UCSF
 
Old 10-16-2008, 05:00 PM
John R Pierce
 
strict memory

Mag Gam wrote:

> I was wondering if there is a way to restrict memory usage
> per process? If a process goes over 32G, simply kill it.

In /etc/profile, use "ulimit -v NNNN" (NNNN in kilobytes) to limit the
maximum virtual memory of all processes spawned by that shell.
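
Something along these lines (a rough sketch; the 4G figure matches the
suggestion below, and skipping root is my own addition):

    # /etc/profile -- cap each login shell (and its children) at 4G virtual
    if [ "$(id -u)" -ne 0 ]; then
        ulimit -v 4194304    # value in KB; 4194304 KB = 4G
    fi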



32G per process on a 64G machine sounds like a bit much. Wouldn't a
limit more like 4G per user session be more appropriate on a multiuser
system?

 
Old 10-16-2008, 05:37 PM
"Mag Gam"
 
strict memory

Hi John:

Well, we run a lot of statistical analysis, and our code loads a lot of
data into a vector for fast calculations. I am not sure how else to do
these calculations quickly without loading everything into memory. That's
why we have to do it this way.

TIA

On Thu, Oct 16, 2008 at 1:00 PM, John R Pierce <pierce@hogranch.com> wrote:
> 32G per process on a 64G machine sounds like a bit much. Wouldn't a
> limit more like 4G per user session be more appropriate on a multiuser
> system?
 
Old 10-16-2008, 05:42 PM
John R Pierce
 
strict memory

Mag Gam wrote:

> Well, we run a lot of statistical analysis, and our code loads a lot of
> data into a vector for fast calculations. I am not sure how else to do
> these calculations quickly without loading everything into memory.

Well, if you've got several processes that each need 32G on a 64G
machine, you're going to end up swapping.


The traditional way of doing this sort of thing on limited-memory
machines was to take a sequential pass through the data, calculating the
statistics on the fly. I know that is very difficult for some algorithms
(FFTs are notorious for being unfriendly to sequential processing), but
for many algorithms a few sequential passes can be /faster/ than random
access and swapping when there's memory/process contention.
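
To illustrate the one-pass idea (a hedged sketch, not anything from the
original code: it assumes a binary file of doubles and uses Welford's
online algorithm for the mean and variance):

    /* onepass.c: mean and variance in one sequential pass, O(1) memory */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        FILE *f;
        double x, delta, mean = 0.0, m2 = 0.0;
        unsigned long n = 0;

        if (argc != 2) {
            fprintf(stderr, "usage: %s data.bin\n", argv[0]);
            return 1;
        }
        f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        while (fread(&x, sizeof x, 1, f) == 1) {
            n++;
            delta = x - mean;
            mean += delta / n;
            m2   += delta * (x - mean);  /* running sum of squared deviations */
        }
        fclose(f);

        if (n > 1)
            printf("n=%lu mean=%g variance=%g\n", n, mean, m2 / (n - 1));
        return 0;
    }

Welford's update is also numerically stable, which matters when you're
streaming billions of values.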

 
Old 10-22-2008, 05:58 AM
"Amos Shapira"
 
strict memory

2008/10/17 Mag Gam <magawake@gmail.com>:
> Well, we run a lot of statistical analysis, and our code loads a lot of
> data into a vector for fast calculations. I am not sure how else to do
> these calculations quickly without loading everything into memory. That's
> why we have to do it this way.

About 15 years ago I changed an application on SGI IRIX from using text
files scanf(3)'ed into memory (with floating-point numbers in them) to
binary files mmap(2)'ed into memory. Processing time was cut by over 95%,
and the program did much more in the remaining 5% (e.g. interactive
real-time viewing of different "frames" of the data).

Using mmap'ed files means the system knows those pages are backed by
blocks on the filesystem, so they don't occupy anonymous memory that has
to be written out to the swap partition whenever the RAM is needed for
something else; they only occupy disk cache, which can simply be freed
if the pages were only read. You also benefit when multiple processes
access the same file: they share the buffer in memory too.

It's not a silver bullet; overly random access can still cause the
system to thrash. But at least it won't take up so much swappable
memory, and it saves a lot of copying (file->kernel->user when reading,
and the other way around when writing), system calls, etc.

If you can process the data in sequential order, possibly with the help
of madvise(2), you can probably squeeze even more out of this approach.
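
A minimal sketch of the idea (my assumptions: a binary file of doubles,
read-only access, and a simple sequential scan):

    /* mmap_scan.c: map a binary file of doubles and scan it sequentially */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        int fd;
        struct stat st;
        double *v, sum = 0.0;
        size_t i, n;

        if (argc != 2) {
            fprintf(stderr, "usage: %s data.bin\n", argv[0]);
            return 1;
        }
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
        if (st.st_size == 0) { fprintf(stderr, "empty file\n"); return 1; }

        v = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (v == MAP_FAILED) { perror("mmap"); return 1; }

        /* Hint that access is front-to-back, so the kernel can read ahead
         * and drop pages behind us instead of swapping them out. */
        madvise(v, st.st_size, MADV_SEQUENTIAL);

        n = st.st_size / sizeof(double);
        for (i = 0; i < n; i++)
            sum += v[i];
        printf("n=%lu mean=%g\n", (unsigned long)n, n ? sum / n : 0.0);

        munmap(v, st.st_size);
        close(fd);
        return 0;
    }

MADV_SEQUENTIAL is only a hint; the program is correct without it, it
just lets the kernel read ahead more aggressively.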

--Amos
 
Old 10-23-2008, 11:17 AM
"Mag Gam"
 
strict memory

ulimit is good per process, but what about total usage? If a user has 5
processes, each taking up 10G, that accounts for 50G. Is there a way to
avoid this? Or to have the VM be sensitive to it, so that once the
system starts swapping we kill the processes that take the most memory?

TIA

On Wed, Oct 22, 2008 at 1:58 AM, Amos Shapira <amos.shapira@gmail.com> wrote:
> If you can process the data in sequential order, possibly with the help
> of madvise(2), you can probably squeeze even more out of this approach.
 
