» Linux Archive
Linux-archive is a website aiming to archive linux email lists and to make them easily accessible for linux users/developers.


Linux Archive > Ubuntu > Ubuntu User
 
Old 02-11-2009, 07:44 PM
Mario Vukelic
 
Default Memory and Paging

On Wed, 2009-02-11 at 13:26 -0700, John Hubbard wrote:
> 1) Have a process running that 'owns' a certain amount of memory (enough
> to run bash/top/kill/pidof and a few other small programs) and keeps
> this memory from being paged out.
> 2) Enough memory set aside for SSHD to allow a new connection.
> 3) Some way to ssh in and access that memory owning process or request
> memory from that process.
>
> Is there any way to do these things?

You want to look into /etc/security/limits.conf, which belongs to the
libpam-modules package. There you can limit the memory available to a
user to some value below the total available memory, and likewise other
parameters such as CPU usage. See, in particular, here:
http://www.kernel.org/pub/linux/libs/pam/Linux-PAM-html/sag-pam_limits.html
But also check out the root of the document:
http://www.kernel.org/pub/linux/libs/pam/Linux-PAM-html/Linux-PAM_SAG.html
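As a sketch, an entry in /etc/security/limits.conf might look like this
(the username "john" and the values are placeholders, not from the thread):

```
# /etc/security/limits.conf  --  <domain> <type> <item> <value>
# "as" limits the address space of john's processes, in kilobytes.
john    soft    as    524288     # ~512 MB; the user can raise this...
john    hard    as    1048576    # ...but never above ~1 GB
```

The limits are applied by pam_limits at login, so they take effect for
new sessions, not for processes that are already running.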


--
ubuntu-users mailing list
ubuntu-users@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-users
 
Old 02-11-2009, 07:49 PM
Ansgar Burchardt
 
Default Memory and Paging

Hi,

John Hubbard <ender8282@yahoo.com> writes:

> My computer has some memory. When I need more memory than the computer
> has, it writes some of the stuff in memory to the hard drive to free up
> memory. This is troublesome because the hard drive is very slow. While
> moving stuff around, the computer often slows way down since there is no
> free memory. To fix things I often need to kill the runaway task.
> (Usually some code that I have written that is misbehaving, or behaving
> properly but using more memory than I expected.) When in this state,
> it often takes a very long time to ssh into the machine to kill the task
> in question. I am trying to figure out a solution to this problem. It
> seems like I would need to do a few things.

If you know which program grabs too much memory, and that it should not
do so when working normally, you can limit the amount of memory
available to it.

Take a look at the `ulimit' shell command and the `setrlimit(2)'
function.
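As a sketch (the program name and the limit are examples, not from the
thread), a per-shell cap with ulimit, which uses setrlimit(2) under the hood:

```shell
# Cap the virtual memory of everything started from this subshell.
# When the limit is hit, allocations fail and the program dies with an
# out-of-memory error instead of dragging the whole machine into swap.
(
    ulimit -v 512000      # limit address space to ~500 MB (value in KB)
    ./my-leaky-program    # hypothetical program under test
)
# The parentheses matter: limits set in the subshell do not affect
# the parent shell, and ulimit -v cannot be raised again once lowered.
```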

Regards,
Ansgar

--
PGP: 1024D/595FAD19 739E 2D09 0969 BEA9 9797 B055 DDB0 2FF7 595F AD19


 
Old 02-11-2009, 07:54 PM
"Jason Crain"
 
Default Memory and Paging

On Wed, February 11, 2009 2:26 pm, John Hubbard wrote:
> [problem description and numbered requirements snipped]
>
> Is there any way to do these things? Does someone else have a different
> approach that accomplishes the same thing? How much memory am I talking
> about? Would 5MB be enough? Any other thoughts or comments?

You can use ulimit, a bash builtin command that limits the
memory/filesize/etc. of the shell and any process run in the shell. It is
documented in the bash man page.
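One convenient way to apply that is a small wrapper script (a sketch;
the script name "capped-run" and the limit are made up for illustration):

```shell
#!/bin/bash
# capped-run: start a command with its memory capped, so a runaway
# process fails fast instead of swapping the machine to death.
ulimit -v 1048576   # cap virtual memory at ~1 GB (value in KB)
exec "$@"           # replace this shell with the capped command
```

Then e.g. `./capped-run ./my-simulation` runs the program under the cap
while the rest of the session keeps its normal limits.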

 
Old 02-11-2009, 08:24 PM
Brian McKee
 
Default Memory and Paging

On Wed, Feb 11, 2009 at 3:26 PM, John Hubbard <ender8282@yahoo.com> wrote:

Are you at the machine? While it stinks security-wise, you could keep
a root session logged in and running on a virtual terminal.

The obvious solution is adding RAM :-) although that's not always
easy or cheap.

Brian

 
Old 02-11-2009, 11:21 PM
Rashkae
 
Default Memory and Paging

John Hubbard wrote:
> [problem description and numbered requirements snipped]
>
> Is there any way to do these things? Does someone else have a different
> approach that accomplishes the same thing? How much memory am I talking
> about? Would 5MB be enough? Any other thoughts or comments?
>


Do your misbehaving programs have to run as root? If not, create a user
for them and stick that user with a reasonable memory limit using
ulimit (sorry, I forget the exact details, but that should be enough
info to get Google going). That way, when user foo reaches 500MB of
used memory, it won't get any more, and you can ssh in to kill bad
processes at your leisure.

Otherwise, you can reduce the size of your swap space and disable
Linux's default "overcommit" behavior (again, I forget the details, but
this one is easy to do). When an application starts filling RAM, you
should hit OOM (out of memory) really fast, and with luck the kernel
will actually do the right thing and kill a memory-hungry app.
(Unfortunately, kernel OOM handling, in my experience, is a blunt
instrument that damages far more than just the misbehaving process. But
at least that way you aren't swapping into infinity before the system
becomes accessible again.)
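The overcommit knob Rashkae is thinking of lives under /proc/sys/vm/. A
sketch of enabling strict accounting (the ratio value is an example):

```shell
# With vm.overcommit_memory=2 the kernel refuses allocations beyond
# swap + overcommit_ratio% of RAM, so a runaway program sees failed
# allocations immediately instead of being OOM-killed much later.
sudo sysctl vm.overcommit_memory=2   # 2 = never overcommit
sudo sysctl vm.overcommit_ratio=80   # count 80% of RAM toward the limit
# The current mode can be read back without root:
cat /proc/sys/vm/overcommit_memory
```

To make the setting survive a reboot, put the same two keys in
/etc/sysctl.conf.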

 
