Old 07-01-2008, 09:52 PM
"Scott Moseman"
 
Rescan /dev/sd* without reboot?

I increased the SAN partition size for a given volume. Is there a way
I can have fdisk recognize the new size without a reboot?

Thanks,
Scott
 
Old 07-01-2008, 10:00 PM
"nate"
 
Rescan /dev/sd* without reboot?

Scott Moseman wrote:
> I increased the SAN partition size for a given volume. Is there a way
> I can have fdisk recognize the new size without a reboot?

This is an old way of doing it but it's worked fine for me over the
years.

Run cat /proc/scsi/scsi and find the device that you resized.

Make sure the device is not in use (not mounted, not in use by device
mapper, multipathing software, LVM, etc.). Assuming it is not:

echo "scsi remove-single-device X X X X" >/proc/scsi/scsi
echo "scsi add-single-device X X X X" >/proc/scsi/scsi

where X X X X is the host, channel, ID, and LUN of the device, for
example:

Contents of /proc/scsi/scsi:
Attached devices:
Host: scsi0 Channel: 00 Id: 06 Lun: 00
  Vendor: PE/PV    Model: 1x2 SCSI BP     Rev: 1.0
  Type:   Processor                       ANSI SCSI revision: 02
Host: scsi0 Channel: 01 Id: 00 Lun: 00
  Vendor: MegaRAID Model: LD 0 RAID1 69G  Rev: 521S
  Type:   Direct-Access                   ANSI SCSI revision: 02

The disk above is the MegaRAID volume, which is "0 1 0 0" (host 0,
channel 1, id 0, lun 0).

You can check to be sure that the device disappears from
/proc/scsi/scsi after you remove it, before re-adding it. If the
device is multipathed, remove all instances of it from
/proc/scsi/scsi. If you don't know what ID it is, your SAN should at
least be able to tell you which LUN it's exported as, which should
help in tracking down which disk is which in /proc/scsi/scsi.

Be careful with that command: if you remove a disk that is in use you
can seriously hose the system, oftentimes requiring a hard power
cycle.
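
For the MegaRAID example above, the filled-in commands would look
like this (a sketch only: /dev/sdb is a made-up device name, and the
four numbers must match your own /proc/scsi/scsi entry):

# Verify the device is unused first (unmounted, no LVM/multipath).
echo "scsi remove-single-device 0 1 0 0" > /proc/scsi/scsi
cat /proc/scsi/scsi    # confirm the entry is gone before re-adding
echo "scsi add-single-device 0 1 0 0" > /proc/scsi/scsi
fdisk -l /dev/sdb      # should now report the new size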

nate

 
Old 07-01-2008, 10:09 PM
"Joseph L. Casale"
 
Rescan /dev/sd* without reboot?

>This is an old way of doing it but it's worked fine for me over the
>years.

I think the new way is documented here:
http://www.linuxjournal.com/article/7321

I am guessing you could rescan it with a less obtrusive method...
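
For example, something like this should work on newer kernels (my
assumption, not taken verbatim from the article; 0:1:0:0 and /dev/sdb
are placeholders for your own host:channel:id:lun and device):

# Rescan just this one device; the kernel re-reads its capacity.
echo 1 > /sys/class/scsi_device/0:1:0:0/device/rescan
fdisk -l /dev/sdb      # check that the new size shows up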

jlc
 
Old 07-01-2008, 10:17 PM
John R Pierce
 
Rescan /dev/sd* without reboot?

Joseph L. Casale wrote:

>> This is an old way of doing it but it's worked fine for me over the
>> years.
>
> I think the new way is documented here:
> http://www.linuxjournal.com/article/7321
I've had very good luck with

echo "- - -" > /sys/class/scsi_host/host?/scan

replacing ? with the proper SCSI/Fibre Channel host number.

I've done this on live systems with minimal impact to other in-use
drives.
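
If you aren't sure which host number you need, a blunt variant (my
own habit, not from the article) is to rescan every host; "- - -"
wildcards the channel, target, and LUN:

# Rescan all SCSI/FC hosts for new or changed devices.
for scan in /sys/class/scsi_host/host*/scan; do
    echo "- - -" > "$scan"
done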


 
Old 07-02-2008, 12:34 AM
Rainer Duffner
 
Rescan /dev/sd* without reboot?

On 02.07.2008 at 00:17, John R Pierce wrote:

>>> This is an old way of doing it but it's worked fine for me over the
>>> years.
>>
>> I think the new way is documented here:
>> http://www.linuxjournal.com/article/7321
>
> I've had very good luck with
>
> echo "- - -" > /sys/class/scsi_host/host?/scan
>
> replacing ? with the proper SCSI/Fibre Channel host number.
>
> I've done this on live systems with minimal impact to other in-use
> drives.

Personally, I find this a very sad state of affairs.
Why on earth is there no API to rescan the SCSI bus (and the fabric)?

There's also Kurt Garloff's rescan-scsi-bus.sh script (I haven't used
it in a while, and not on RHEL 5).
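
From memory (check the script's own help output - I haven't verified
the flags recently), usage is along these lines:

# Ships with sg3_utils; scans all hosts and adds new devices.
./rescan-scsi-bus.sh
# Optionally also drop devices that have disappeared (use with care):
./rescan-scsi-bus.sh -r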


FYI: In W2K3, you enlarge the LUN on the SAN, use diskpart.exe to
enlarge the volume...and that's it!

http://support.microsoft.com/kb/325590/en-us

I can't believe that nobody needs that in Linux-land.
If you enlarge the LUN on the SAN for a Linux volume, you end up with
a second partition behind the first - you'd need to do some nasty,
dangerous disklabel manipulations to fix that.
I end up just adding another LUN and using LVM to piece them together
(see the sketch below). Of course, having multiple LUNs from a SAN in
an LVM volume group makes it next to impossible to create a
consistent snapshot (via the SAN's snapshot functionality) if the SAN
(like all HP EVAs, AFAIK) can only snapshot one LUN at a time.
(And lately, we use ZFS and a cheap MSA70, which eliminates most of
these inconveniences and happens to save a huge amount of money
compared to a SAN from HP.)
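
The piecing-together itself is simple enough - roughly like this
(device and volume group names made up for illustration):

# New LUN appears as, say, /dev/sdc after a rescan.
pvcreate /dev/sdc          # label the LUN as an LVM physical volume
vgextend datavg /dev/sdc   # add it to the existing volume group
# ...then grow the logical volume and filesystem as usual.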




cheers,
Rainer
--
Rainer Duffner
CISSP, LPI, MCSE
rainer@ultra-secure.de


 
Old 07-02-2008, 12:57 AM
"nate"
 
Rescan /dev/sd* without reboot?

Rainer Duffner wrote:

> I can't believe that nobody needs that in Linux-land.
> If you enlarge the LUN on the SAN for a Linux volume, you end up
> with a second partition behind the first - you'd need to do some
> nasty, dangerous disklabel manipulations to fix that.
> I end up just adding another LUN and using LVM to piece them
> together. Of course, having multiple LUNs from a SAN in an LVM
> volume group makes it next to impossible to create a consistent
> snapshot (via the SAN's snapshot functionality) if the SAN (like
> all HP EVAs, AFAIK) can only snapshot one LUN at a time.

An increasing number of storage arrays support thin provisioning.
I made extensive use of this technology at my last company. Say you
need 100GB today but may need to grow later. You can create a 1TB
volume, export it to the host, and, depending on disk usage patterns,
optionally create a 100GB logical volume on it. Fill up that volume,
and while you have a 1TB drive exported to the system, only 100GB of
space is utilized on the array. Increase the size of the logical
volume with lvextend and off you go. No fuss, no muss.
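
Concretely, the grow step is just this (volume names hypothetical;
the thin LUN already gives the volume group plenty of free extents):

# Grow the logical volume by 50GB, then the ext3 filesystem on it
# (resize2fs can grow ext3 online on reasonably current kernels).
lvextend -L +50G /dev/bigvg/appdata
resize2fs /dev/bigvg/appdata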

If your application's space usage characteristics are such that it
doesn't consume large amounts of space and then free it, you can
create that 1TB volume off the bat and never have to worry about
extending it (until you hit that 1TB). Or create a 2TB volume (or
bigger). Thin provisioning is a one-way trip: once the space is
allocated (on the array) it cannot be "freed". Though I read a
storage blog where a NetApp guy talked about a utility they have to
reclaim space from TP volumes running NTFS; I haven't seen anything
for Linux (and the guy warned it's a very I/O-intensive operation).
For apps that are sloppy with space I just restrict their usage with
LVM; that way I know I can easily extend things and still control
growth with an iron fist if I so desire.

At my last company I achieved 400% oversubscription with thin
provisioning. It did take several months of closely watching the
space utilization characteristics of the various applications to
determine the optimal storage configuration. The vendor says that on
average customers save about 50% of their space using this
technology.

If it turns out you never use more than 100GB, nothing is lost; the
rest of the space remains available to be allocated to other systems.
No waste.

I'm optimistic that in the coming years the standard filesystems will
include more intelligence with regard to thin provisioning: being
able to mark freed space in such a way that the array can determine
with certainty that it is no longer in use and reclaim it, and
intelligently re-using recently deleted blocks before allocating new
ones (to some extent they do this already, but it's not good enough).
Thin provisioning has really started to take off in the past year or
so; the number of storage vendors supporting it has gone up 10x. How
well it actually works depends on the vendor; some of the
architectures out there are better than others.

nate

 
