Linux Archive


Peter Kjellström 09-01-2011 09:43 AM

CentOS 6 Partitioning Map/Schema
 
On Thursday, September 01, 2011 03:21:25 AM Jonathan Vomacka wrote:
> Good Evening All,
>
> I have a question regarding CentOS 6 server partitioning. Now I know
> there are a lot of different ways to partition the system and different
> opinions depending on the use of the server. I currently have a quad
> core intel system running 8GB of RAM with 1 TB hard drive (single). In
> the past as a FreeBSD user, I have always made a physical volume of the
> root filesystem (/), SWAP, /tmp, /usr, /var, and /home. In the
> partitioning manager I would always specify 10GB for root, 2GB or so for
> SWAP, 20GB var, 50GB usr, 10GB /tmp, and allocate all remaining space to

I don't think the above figures are bad. Then again, the CentOS default (/boot
+ /) plus adding your /home may be more flexible. After that, if I split
it further, I'd make a stand-alone /var and maybe /tmp. Splitting /usr from /
seems like more trouble than it's worth to me.

Also I'd use LVM for everything but /boot and leave some unused space in the
VG that I could use for lvextend + resize2fs later.
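That later growth step might look like this (a sketch; the VG/LV names
vg_system and lv_home are hypothetical, and both commands need root):

```shell
# Grow the LV by 10G out of the free space left in the VG, then grow the
# ext filesystem on it to match (ext3/ext4 can be grown while mounted).
lvextend -L +10G /dev/vg_system/lv_home
resize2fs /dev/vg_system/lv_home
```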

Just my take on it.

> my home directory as my primary data volume (assuming all my
> applications are installed and ran from my home directories). I was
> recently told that this is an old style of partitioning and is not used
> in modern day Linux distributions. So more accurately, here are my
> questions to the list:
>
> 1) What is a good partition map/schema for a server OS whose
> primary purpose is to be a LAMP server, DNS (bind), and possibly gameservers?
>
> 2) CentOS docs recommend using 10GB SWAP for 8GB of RAM: 1x the amount
> of physical memory + 2GB added. (Reference:
> http://www.centos.org/docs/5/html/Installation_Guide-en-US/s1-diskpartitioning-x86.html)
> I was told this is ridiculous and will severely slow down
> the system. Is this true?

Disclaimer: the following is based on CentOS-5 and I'm not 100% sure whether
all (or any) of it applies to the CentOS-6 kernel.

* Some swap (as opposed to no swap) seems to increase system stability under
OOM conditions (depending on a lot of factors).

* You'll need at least as much swap as the max stack size you intend to set
(ulimit -s). Usually this is very low but in some instances you need a
significant percentage of your RAM size. An alternative is to set max stack
size to unlimited when needed (which _does not_, thankfully, require an
infinite amount of swap...).
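To see or raise the limit in question (a sketch; 8192 kB is just a common
default, not a CentOS-specific value):

```shell
# Current soft stack-size limit for this shell, in kB (commonly 8192).
ulimit -s
# Raise it for this shell and its children (requires hard-limit headroom):
# ulimit -s unlimited
```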

Based on this I'd say just add some swap (like a gig or two) unless you know
you want a high max stack size.

If you left space in your VG you can always add another chunk of swap later.
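Adding that later chunk might look like this (a sketch; the VG name vg_system
is hypothetical, and the commands need root):

```shell
# Carve a new 2G LV out of the VG's free space, format it as swap, enable it.
lvcreate -L 2G -n lv_swap2 vg_system
mkswap /dev/vg_system/lv_swap2
swapon /dev/vg_system/lv_swap2
# Add a matching line to /etc/fstab so it comes back after a reboot.
```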

> If so, what is a good swap space to use for 8GB
> of RAM? MIT recommends making MULTIPLE 2GB swap spaces

This shouldn't really make much difference. Long ago swap size was limited to
2G, but I don't even remember if that was per swap or in total. Either way, you
can have 5x 2G or 1x 10G. Linux will balance its usage over all available
swaps, so if you have several independent drives, put a swap on each for
maximum performance (although it's my feeling that if you need swap
performance, you're probably doing something wrong...).
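To get that balancing across drives, give the swap areas equal priority in
/etc/fstab (device names are illustrative):

```
/dev/sda2   none   swap   sw,pri=10   0 0
/dev/sdb2   none   swap   sw,pri=10   0 0
```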

> equaling 10GB if this is the case. Please help!
>
> 3) Is EXT4 better or worse to use than XFS for what I am planning to use
> the system for?

Much has been said here. I'd stay with the distro default unless I had specific
reasons. If you need >16TB you have to use XFS. If you're on 32-bit you have to
use ext*.

If you're trying to decide based on performance then try it out on your
hardware (where preferably "it" is close to your actual work load).

/Peter

> Thanks in advance for all your help guys
>
> Kind Regards,
> Jonathan Vomacka
_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos

John Doe 09-01-2011 03:17 PM

CentOS 6 Partitioning Map/Schema
 
From: Jonathan Vomacka <juvix88@gmail.com>

> I have a question regarding CentOS 6 server partitioning.

http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/s2-diskpartrecommend-x86.html

Lamar Owen 09-01-2011 03:29 PM

CentOS 6 Partitioning Map/Schema
 
On Wednesday, August 31, 2011 09:21:25 PM Jonathan Vomacka wrote:
> I was
> recently told that this is an old style of partitioning and is not used
> in modern day Linux distributions.

The only thing old-style I saw in your list was the separate /usr partition. I like having separate /var, /tmp, and /home. /var because lots of data resides there that can fill partitions quickly; logs, for instance. I have built machines where /var/log was separate, specifically for that reason.

> So more accurately, here are my
> questions to the list:
>
> 1) What is a good partition map/schema for a server OS whose
> primary purpose is to be a LAMP server, DNS (bind), and possibly gameservers?

Splitting out filesystems into partitions makes sense primarily, in my opinion and experience, in seven basic aspects:
1.) I/O load balancing across multiple spindles and/or controllers;
2.) Disk space isolation in case of filesystem 'overflow' (that is, you don't want your mail spool in /var/spool/mail overflowing to be able to corrupt an online database in, say, /var/lib/pgsql/data/base) (and while quotas can help with this when two trees are not fully isolated, filesystems in different partitions/logical volumes have hard overflow isolation);
3.) In the case of really large data stores with dynamic data, isolating the impact of filesystem corruption;
4.) The ability to stagger fsck's between boots (the fsck time doesn't seem to increase linearly with filesystem size);
5.) I/O 'tiering' (like EMC's FAST) where you can allocate your fastest storage to the most rapidly changing data, and slower storage to data that doesn't change frequently;
6.) Putting things into separate filesystems forces the admin to really think through and design the system taking into account all the requirements, instead of just throwing it all together and then wondering why performance is suboptimal;
7.) Filesystems can be mounted with options specific to their use cases, and using filesystem technology appropriate to the use case (noexec, for instance, on filesystems that have no business having executables on them; enabling/disabling journalling and other options as appropriate; and using XFS, ext4, etc. as appropriate, just to mention a few things).
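As an illustration of point 7, a hypothetical /etc/fstab entry for a data
filesystem that has no business holding executables (device and mount-point
names are made up):

```
/dev/vg_system/lv_data   /srv/data   ext4   defaults,noexec,nosuid,nodev   1 2
```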

> 2) CentOS docs recommend using 10GB SWAP for 8GB of RAM. 1X the amount
> of physical memory + 2GB added.

If you put swap on LVM and give yourself room to grow, you can increase swap space at will should you find you need to. Larger RAM (and the virtual RAM embodied by swap) does not always make things faster. I have a private e-mail from an admin of a large website showing severe MySQL performance issues that were reduced by making the RAM size smaller (it turned out to be a cache mismanagement problem caused by poorly written queries).

Consider swap to be a safety buffer; the Linux kernel is by default configured to overcommit memory, and swap exists to prevent the oom-killer from reaping critical processes in this situation. Tuning swap size and the 'swappiness' of the kernel along with the overcommit policy should be done together; the default settings produce the default recommendation of 'memory size plus 2GB' that was for CentOS 5. Not too long ago, the recommendation was for swap to be twice the memory size.
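The 'memory size plus 2GB' figure quoted above works out like this (the helper
function is mine, a plain restatement of that guideline rather than an official
tool; all sizes in GB):

```shell
# CentOS-5-era guideline: swap = RAM + 2 (figures in GB).
recommended_swap_gb() {
    ram_gb=$1
    echo $(( ram_gb + 2 ))
}
recommended_swap_gb 8    # prints 10 for the OP's 8GB machine
```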

Multiple swap partitions can improve performance if those partitions are on different spindles; however, this reduces reliability, too. I don't have any experience with benchmarking the performance of multiple 2GB swap spaces; I'd find results of such benchmarks to be useful information.

> 3) Is EXT4 better or worse to use than XFS for what I am planning to use
> the system for?

That depends; consult some file system comparisons (the Wikipedia file system comparison article is a good starting place). I've used both, and I still use both. XFS as a filesystem is older and presumably more mature than ext4, but age is not the only indicator of something that will work for you. One thing to remember is that XFS filesystems cannot currently be reduced in size, only increased. Ext4 can go either way if you realize you made too large a filesystem.

XFS is very fast to create, but repairing it requires absolutely the most RAM of any recovery process I've ever seen. XFS has seen a lot of use in the field, particularly with large SGI boxes (Altix series, primarily) running Linux, with the requisite 'lots of RAM' required for repair/recovery.

XFS is currently the only one where I have successfully made a larger-than-16TB filesystem. Don't try that on a 32-bit system (in fact, if you care about data integrity, don't use XFS on a 32-bit system at all, unless you have rebuilt the kernel with 8k stacks). mkfs.xfs on a greater-than-16TB partition/logical volume will execute successfully on a 32-bit system (the last time I tried it), but as soon as you go over 16TB with your data you will no longer be able to mount the filesystem. The wisdom of making a greater-than-16TB filesystem of any type is left as an exercise for the reader....

Jonathan Vomacka 09-01-2011 05:34 PM

CentOS 6 Partitioning Map/Schema
 
Lamar,

Excellent email. Thank you so much; you have been very informative!


Jonathan Vomacka 09-01-2011 05:38 PM

CentOS 6 Partitioning Map/Schema
 
John Doe,

Thanks, this is a good read and makes me feel better about splitting
partitions.


Devin Reade 09-02-2011 04:03 PM

CentOS 6 Partitioning Map/Schema
 
You've already received some good responses, so I won't rehash a
lot of what was said. However, here are a few more comments without
a lot of backing detail (though they should give you enough info to
google for details):

1. Despite the RedHat link someone provided, I think the advice of
putting almost everything on the root filesystem is a lot of
bunk, at least for servers. The old arguments for separate
filesystems still apply. I suspect that the single filesystem
perspective is coming from desktop scenarios, and especially
laptop users and those coming from MS Windows.

2. Putting /boot on its own filesystem and using LVM for everything
else is a generally good idea from both the management and
snapshot perspectives as someone previously described. However be
aware that most (if not all) LVM configurations will disable
write barriers -- this is probably mostly of interest for when
you're running a database. You need to put on your combined
DBA and sysadmin hat, have a look at your underlying disks,
disk controller, filesystem stack, database, UPS/powerfail
monitoring, and budget to see where your balancing point is.
Yes, I have databases on LVM on top of RAID on top of SATA;
but it's better to know your risks rather than having them
be a surprise.

3. Pay attention to whether your disks are using the old 512 byte
sector size or the new 4k sector size (sometimes called advanced
disk format), and whether or not your disks lie to the OS about
the sector size. The RAID, other MD layers, and filesystem
need to know the truth or you can run into performance and/or
lifespan issues.
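One way to check what the kernel believes about your sector sizes (a sketch;
/sys/block/sda is an example path, substitute your own disk):

```shell
# Compare the logical and physical block sizes the kernel reports in sysfs.
# A 512e "advanced format" drive shows logical 512 but physical 4096.
sector_sizes() {
    base=$1                                     # e.g. /sys/block/sda
    l=$(cat "$base/queue/logical_block_size")
    p=$(cat "$base/queue/physical_block_size")
    echo "logical/physical: $l/$p"
    [ "$l" = 512 ] && [ "$p" = 4096 ] && echo "512e (4k advanced format)"
}
# sector_sizes /sys/block/sda
```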

4. Regarding swap: Yes, having it is still a good idea under most
circumstances. The old "2 * physical memory" rule no longer applies.
Follow the sizing guidelines from RedHat that someone posted.
The kernel is smart enough to use it when necessary and avoid it
otherwise. Having it can get your server through unusual circumstances
without crashing but you should have enough memory that you're not
paging under normal circumstances. See also point #6.

5. Consider encrypting swap. See crypttab(5), including the comments
about using /dev/urandom for the key.
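For illustration, a randomly-keyed swap entry might look like this (the device
path is hypothetical; the 'swap' option makes the system run mkswap on the
mapped device at each boot):

```
# /etc/crypttab
swap    /dev/vg_system/lv_swap    /dev/urandom    swap
```

The matching /etc/fstab line would then point at /dev/mapper/swap.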

6. Putting /tmp on tmpfs is a good idea in that it ensures that it
gets cleaned out at least when the system reboots. (Running cron
jobs to clear it out periodically can cause problems under some
circumstances.) This is a good argument for having swap; you can
use tmpfs without a significant risk of /tmp using up physical
RAM. Also see the 'tmp' option in crypttab(5).
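A minimal /etc/fstab line for a tmpfs /tmp (the 2g cap is an arbitrary
example; tmpfs pages can spill to swap under memory pressure):

```
tmpfs   /tmp   tmpfs   defaults,size=2g   0 0
```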

7. Under CentOS 5 having less than 2G for /var could cause problems
with updates, especially between minor versions. I've increased
my minimum to 4G under RHEL6 due to kdump concerns.

Devin


Jonathan Vomacka 09-02-2011 07:14 PM

CentOS 6 Partitioning Map/Schema
 
Thank you to everyone who responded and contributed to this topic. I
appreciate it greatly!


