04-11-2012, 03:01 PM
Dave Anderson

add a new command: ipcs

----- Original Message -----
> Sorry, I made a mistake.
>
> At 2012-4-11 17:06, qiaonuohan wrote:
> >
> > what is struct nsproxy? Or is there any symbol referring to ipc_ns?
>
> I want to know how the kernel gets struct ipc_ids. Both the earlier
> and the later kernels use current->nsproxy->ipc_ns to get the pointer
> to struct ipc_namespace, and then use the shm_ids macro to get struct
> ipc_ids. What about this kernel?

It does the same thing. But in the failure case, it appears that the
task_struct.nsproxy pointer that you are using is NULL:

> >
> > (c) On this 2.6.36-0.16.rc3.git0.fc15 Fedora kernel, it shows:
> >
> > ------ Shared Memory Segments ------
> > KEY SHMID UID PERMS BYTES NATTCH
> > STATUS
> > ipcs: invalid kernel virtual address: 10 type: "nsproxy.ipc_ns"
>
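For reference, the lookup chain being discussed looks roughly like this on
the namespace-aware 2.6.3x kernels -- a paraphrase of the ipc/shm.c pattern,
not the literal kernel source:

#include <linux/sched.h>                 /* current */
#include <linux/nsproxy.h>
#include <linux/ipc_namespace.h>

#define IPC_SHM_IDS  2                             /* as in ipc/util.h */
#define shm_ids(ns)  ((ns)->ids[IPC_SHM_IDS])      /* the shm_ids macro */

static struct ipc_ids *current_shm_ids(void)
{
        struct nsproxy *nsp = current->nsproxy;    /* NULL in the failing case above */
        struct ipc_namespace *ns;

        if (!nsp)
                return NULL;

        ns = nsp->ipc_ns;                          /* the pointer ipcs dereferences */
        return &shm_ids(ns);
}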

> >> (d) On *all* RHEL4 2.6.9-era and SLES9 2.6.5-era kernels, the
> >> command
> >> fails like this:
> >>
> >> ------ Shared Memory Segments ------
> >> KEY SHMID UID PERMS BYTES NATTCH STATUS
> >> ipcs: invalid structure member offset: ipc_id_ary_p
> >> FILE: ipcs.c LINE: 540 FUNCTION: ipc_search_array()
> >>
> >> or this:
> >>
> >> ------ Shared Memory Segments ------
> >> KEY SHMID UID PERMS BYTES NATTCH STATUS
> >> (none allocated)------ Semaphore Arrays --------
> >> KEY SEMID UID PERMS NSEMS
> >> ipcs: invalid structure member offset: ipc_id_ary_p
> >> FILE: ipcs.c LINE: 540 FUNCTION: ipc_search_array()
> >>
> >
> > what is struct ipc_id? And what is entries in struct ipc_id or something
> > similar to it?
>
> I mean struct ipc_ids, not ipc_id.

RHEL4/linux-2.6.9:

crash> ipc_ids
struct ipc_ids {
    int size;
    int in_use;
    int max_id;
    short unsigned int seq;
    short unsigned int seq_max;
    struct semaphore sem;
    struct ipc_id *entries;
}
SIZE: 56
crash>
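So on these 2.6.9-era kernels there is no ipc_id_ary at all -- ipc_ids points
directly at an array of struct ipc_id. A sketch of how the command might
handle that older layout, resolving everything at runtime with crash's
MEMBER_OFFSET()/STRUCT_SIZE()/readmem() facilities (the helper name and the
bare printout are illustrative only; a real version would feed each
kern_ipc_perm pointer into the normal display path):

/* Assumes crash's "defs.h", as in any extension module. */
static void
walk_old_ipc_ids(ulong ipc_ids_addr)
{
        long entries_off, size_off, id_size;
        int i, size;
        ulong entries, perm;

        entries_off = MEMBER_OFFSET("ipc_ids", "entries");
        size_off    = MEMBER_OFFSET("ipc_ids", "size");
        id_size     = STRUCT_SIZE("ipc_id");

        if (entries_off < 0 || size_off < 0 || id_size <= 0)
                error(FATAL, "unsupported ipc_ids layout\n");

        readmem(ipc_ids_addr + size_off, KVADDR, &size, sizeof(int),
                "ipc_ids size", FAULT_ON_ERROR);
        readmem(ipc_ids_addr + entries_off, KVADDR, &entries, sizeof(void *),
                "ipc_ids entries", FAULT_ON_ERROR);

        for (i = 0; i < size; i++) {
                /* each entry is a struct ipc_id: { struct kern_ipc_perm *p; } */
                readmem(entries + i * id_size, KVADDR, &perm, sizeof(void *),
                        "ipc_id p", FAULT_ON_ERROR);
                if (perm)
                        fprintf(fp, "kern_ipc_perm: %lx\n", perm);
        }
}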

Dave

04-11-2012, 03:14 PM
Wen Congyang

add a new command: ipcs

At 2012/4/11 22:50, Dave Anderson Wrote:



----- Original Message -----

Hello Dave,

I cannot get all of those kernels at hand, so I have to ask you about the code.
Please show me.


Why not? Just download the upstream kernels from here:

http://www.kernel.org/pub/linux/kernel/v2.6/



(a) On these kernel versions:

2.6.9-89.ELxenU
2.6.15-1.2054_FC5
2.6.16.33-xen
2.6.18-1.2714.el5xen
2.6.18-36.el5xen
2.6.18-58.el5xen
2.6.18-152.el5xen
2.6.31 uniprocessor kernel

the command fails immediately with this error:

ipcs: cannot resolve "hugetlbfs_file_operations"


(b) On *all* RHEL5 2.6.18-era kernels, the message queue display
always fails like this:

------ Message Queues --------
KEY MSQID UID PERMS USED-BYTES
MESSAGES
ipcs: invalid structure member offset: kern_ipc_perm_id
FILE: ipcs.c LINE: 899 FUNCTION: get_msg_info()


I want to see struct msg_queue and struct kern_ipc_perm.


Here is the output from a RHEL5 kernel:

crash> msg_queue
struct msg_queue {
    struct kern_ipc_perm q_perm;
    int q_id;
    time_t q_stime;
    time_t q_rtime;
    time_t q_ctime;
    long unsigned int q_cbytes;
    long unsigned int q_qnum;
    long unsigned int q_qbytes;
    pid_t q_lspid;
    pid_t q_lrpid;
    struct list_head q_messages;
    struct list_head q_receivers;
    struct list_head q_senders;
}
SIZE: 160
crash> kern_ipc_perm
struct kern_ipc_perm {
    spinlock_t lock;
    int deleted;
    key_t key;
    uid_t uid;
    gid_t gid;
    uid_t cuid;
    gid_t cgid;
    mode_t mode;
    long unsigned int seq;
    void *security;
}
SIZE: 48
crash>

which is the same as the upstream 2.6.18 kernel.


Ahh, I know the reason now: msg_queue_q_id is not initialized!
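A plausible shape for that fix, sketched here on the assumption that the
extension resolves member offsets itself: initialize both candidate offsets
and fall back from kern_ipc_perm.id (which only exists on later kernels) to
msg_queue.q_id (which is what RHEL5's 2.6.18 provides). The helper names are
illustrative, not the patch's actual code:

static long kern_ipc_perm_id_off;   /* -1 on kernels without kern_ipc_perm.id */
static long msg_queue_q_id_off;     /* -1 on kernels without msg_queue.q_id */

static void
init_msg_offsets(void)
{
        kern_ipc_perm_id_off = MEMBER_OFFSET("kern_ipc_perm", "id");
        msg_queue_q_id_off   = MEMBER_OFFSET("msg_queue", "q_id");
}

static int
read_msg_id(ulong msg_queue_addr, long q_perm_off)
{
        int id;

        if (kern_ipc_perm_id_off >= 0)
                readmem(msg_queue_addr + q_perm_off + kern_ipc_perm_id_off,
                        KVADDR, &id, sizeof(int), "kern_ipc_perm id",
                        FAULT_ON_ERROR);
        else if (msg_queue_q_id_off >= 0)
                readmem(msg_queue_addr + msg_queue_q_id_off, KVADDR,
                        &id, sizeof(int), "msg_queue q_id", FAULT_ON_ERROR);
        else
                error(FATAL, "cannot determine the message queue id\n");

        return id;
}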





(c) On this 2.6.36-0.16.rc3.git0.fc15 Fedora kernel, it shows:

------ Shared Memory Segments ------
KEY SHMID UID PERMS BYTES NATTCH
STATUS
ipcs: invalid kernel virtual address: 10 type: "nsproxy.ipc_ns"


what is struct nsproxy? Or is there any symbol referring to ipc_ns?


crash> nsproxy
struct nsproxy {
    atomic_t count;
    struct uts_namespace *uts_ns;
    struct ipc_namespace *ipc_ns;
    struct mnt_namespace *mnt_ns;
    struct pid_namespace *pid_ns;
    struct net *net_ns;
}
SIZE: 48
crash>

It's the same as upstream 2.6.36, but it's not the offset that's invalid,
it's the NULL "nsproxy" address.


I am surprised that nsproxy is NULL.

Each user task belongs to a namespace, so current_task.nsproxy should not
be NULL. I guess the current task may be a kernel thread in your test.

Thanks
Wen Congyang








(d) On *all* RHEL4 2.6.9-era and SLES9 2.6.5-era kernels, the
command fails like this:

------ Shared Memory Segments ------
KEY SHMID UID PERMS BYTES NATTCH
STATUS
ipcs: invalid structure member offset: ipc_id_ary_p
FILE: ipcs.c LINE: 540 FUNCTION: ipc_search_array()

or this:

------ Shared Memory Segments ------
KEY SHMID UID PERMS BYTES NATTCH
STATUS
(none allocated)------ Semaphore Arrays --------
KEY SEMID UID PERMS NSEMS
ipcs: invalid structure member offset: ipc_id_ary_p
FILE: ipcs.c LINE: 540 FUNCTION: ipc_search_array()



what is struct ipc_id? And what is entries in struct ipc_id or something
similar to it?


This is from a RHEL4 kernel -- and the upstream 2.6.9 kernel is the same:

crash> ipc_id
struct ipc_id {
    struct kern_ipc_perm *p;
}
SIZE: 8
crash> kern_ipc_perm
struct kern_ipc_perm {
    spinlock_t lock;
    int deleted;
    key_t key;
    uid_t uid;
    gid_t gid;
    uid_t cuid;
    gid_t cgid;
    mode_t mode;
    long unsigned int seq;
    void *security;
}
SIZE: 56
crash>

Dave




04-11-2012, 03:56 PM
Dave Anderson

add a new command: ipcs

----- Original Message -----
> At 2012/4/11 22:50, Dave Anderson Wrote:
> >
> >
> > ----- Original Message -----
> >> Hello Dave,
> >>
> >> I cannot get all of those kernels at hand, so I have to ask you about
> >> the code. Please show me.
> >
> > Why not? Just download the upstream kernels from here:
> >
> > http://www.kernel.org/pub/linux/kernel/v2.6/
> >
> >>>
> >>> (a) On these kernel versions:
> >>>
> >>> 2.6.9-89.ELxenU
> >>> 2.6.15-1.2054_FC5
> >>> 2.6.16.33-xen
> >>> 2.6.18-1.2714.el5xen
> >>> 2.6.18-36.el5xen
> >>> 2.6.18-58.el5xen
> >>> 2.6.18-152.el5xen
> >>> 2.6.31 uniprocessor kernel
> >>>
> >>> the command fails immediately with this error:
> >>>
> >>> ipcs: cannot resolve "hugetlbfs_file_operations"
> >>>
> >>>
> >>> (b) On *all* RHEL5 2.6.18-era kernels, the message queue display
> >>> always fails like this:
> >>>
> >>> ------ Message Queues --------
> >>> KEY MSQID UID PERMS USED-BYTES
> >>> MESSAGES
> >>> ipcs: invalid structure member offset: kern_ipc_perm_id
> >>> FILE: ipcs.c LINE: 899 FUNCTION: get_msg_info()
> >>
> >> I want to see struct msg_queue and struct kern_ipc_perm.
> >
> > Here is the output from a RHEL5 kernel:
> >
> > crash> msg_queue
> > struct msg_queue {
> > struct kern_ipc_perm q_perm;
> > int q_id;
> > time_t q_stime;
> > time_t q_rtime;
> > time_t q_ctime;
> > long unsigned int q_cbytes;
> > long unsigned int q_qnum;
> > long unsigned int q_qbytes;
> > pid_t q_lspid;
> > pid_t q_lrpid;
> > struct list_head q_messages;
> > struct list_head q_receivers;
> > struct list_head q_senders;
> > }
> > SIZE: 160
> > crash> kern_ipc_perm
> > struct kern_ipc_perm {
> > spinlock_t lock;
> > int deleted;
> > key_t key;
> > uid_t uid;
> > gid_t gid;
> > uid_t cuid;
> > gid_t cgid;
> > mode_t mode;
> > long unsigned int seq;
> > void *security;
> > }
> > SIZE: 48
> > crash>
> >
> > which is the same as the upstream 2.6.18 kernel.
>
> Ahh, I know the reason now: msg_queue_q_id is not initialized!
>
> >
> >>>
> >>> (c) On this 2.6.36-0.16.rc3.git0.fc15 Fedora kernel, it shows:
> >>>
> >>> ------ Shared Memory Segments ------
> >>> KEY SHMID UID PERMS BYTES
> >>> NATTCH
> >>> STATUS
> >>> ipcs: invalid kernel virtual address: 10 type:
> >>> "nsproxy.ipc_ns"
> >>
> >> what is struct nsproxy? Or is there any symbol referring to
> >> ipc_ns?
> >
> > crash> nsproxy
> > struct nsproxy {
> > atomic_t count;
> > struct uts_namespace *uts_ns;
> > struct ipc_namespace *ipc_ns;
> > struct mnt_namespace *mnt_ns;
> > struct pid_namespace *pid_ns;
> > struct net *net_ns;
> > }
> > SIZE: 48
> > crash>
> >
> > It's the same as upstream 2.6.36, but it's not the offset that's invalid,
> > it's the NULL "nsproxy" address.
>
> I am surprised that nsproxy is NULL.
>
> Each user task belongs to a namespace, so current_task.nsproxy should not
> be NULL. I guess the current task may be a kernel thread in your test.
>
> Thanks
> Wen Congyang

Actually, even kernel threads have a valid task->nsproxy setting.

But checking into this a bit further, it's not a kernel thread,
but an exiting thread. Note the invocation-time warning that
the active (panic) task has been removed from the PID hash:

$ crash vmcore.2.6.36-0.16.rc3.git0.fc15.x86_64 vmlinux-2.6.36-0.16.rc3.git0.fc15.x86_64.gz

crash 6.0.6rc5
Copyright (C) 2002-2012 Red Hat, Inc.
Copyright (C) 2004, 2005, 2006 IBM Corporation
Copyright (C) 1999-2006 Hewlett-Packard Co
Copyright (C) 2005, 2006 Fujitsu Limited
Copyright (C) 2006, 2007 VA Linux Systems Japan K.K.
Copyright (C) 2005 NEC Corporation
Copyright (C) 1999, 2002, 2007 Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002 Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions. Enter "help copying" to see the conditions.
This program has absolutely no warranty. Enter "help warranty" for details.

GNU gdb (GDB) 7.3.1
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...

please wait... (determining panic task)
WARNING: active task ffff88001d190000 on cpu 0 not found in PID hash

KERNEL: vmlinux-2.6.36-0.16.rc3.git0.fc15.x86_64.gz
DUMPFILE: vmcore.2.6.36-0.16.rc3.git0.fc15.x86_64
CPUS: 1
DATE: Fri Sep 24 20:46:58 2010
UPTIME: 00:27:55
LOAD AVERAGE: 1.53, 1.80, 1.56
TASKS: 118
NODENAME: dyna0.home.front
RELEASE: 2.6.36-0.16.rc3.git0.fc15.x86_64
VERSION: #1 SMP Fri Sep 3 16:00:27 UTC 2010
MACHINE: x86_64 (1600 Mhz)
MEMORY: 510.7 MB
PANIC: ""
PID: 7124
COMMAND: "hardlink"
TASK: ffff88001d190000 [THREAD_INFO: ffff88001b17a000]
CPU: 0
STATE: EXIT_DEAD (PANIC)

crash>

Note that the "ipcs" command uses the current task, whose task_struct
address is ffff88001d190000 in this particular case, and therefore the
task_struct.nsproxy address is ffff88001d1905f0:

crash> task -R nsproxy
PID: 7124 TASK: ffff88001d190000 CPU: 0 COMMAND: "hardlink"
nsproxy = 0x0,
crash>

Resulting in the error below, where the bogus address 10 is simply the
NULL nsproxy pointer plus the 0x10 offset of ipc_ns within struct nsproxy:

crash> set debug 4
debug: 4
text hit rate: 62% (3143 of 5040)
crash> ipcs
------ Shared Memory Segments ------
KEY SHMID UID PERMS BYTES NATTCH STATUS
<readmem: ffff88001d1905f0, KVADDR, "task_struct.nsproxy", 8, (FOE), 7fffaf719e98>
<read_kdump: addr: ffff88001d1905f0 paddr: 1d1905f0 cnt: 8>
<readmem: 10, KVADDR, "nsproxy.ipc_ns", 8, (FOE), 7fffaf719e90>
ipcs: invalid kernel virtual address: 10 type: "nsproxy.ipc_ns"
text hit rate: 62% (3143 of 5040)
crash>

The "ipcs" code may have to do something similar to what the "mount"
command does here in cmd_mount():

        /* find a context */
        pid = 1;
        while ((namespace_context = pid_to_context(pid)) == NULL)
                pid++;

where namespace_context is used later in get_mount_list():

        } else if (VALID_MEMBER(task_struct_nsproxy)) {
                tc = namespace_context;

                readmem(tc->task + OFFSET(task_struct_nsproxy), KVADDR,
                        &nsproxy, sizeof(void *), "task nsproxy",
                        FAULT_ON_ERROR);
                if (!readmem(nsproxy + OFFSET(nsproxy_mnt_ns), KVADDR,
                    &mnt_ns, sizeof(void *), "nsproxy mnt_ns",
                    RETURN_ON_ERROR|QUIET))
                        error(FATAL, "cannot determine mount list location!\n");
                if (!readmem(mnt_ns + OFFSET(mnt_namespace_root), KVADDR,
                    &root, sizeof(void *), "mnt_namespace root",
                    RETURN_ON_ERROR|QUIET))
                        error(FATAL, "cannot determine mount list location!\n");

Usually pid 1 would suffice, but as I recall, Bob Montgomery ran into
a vmcore where pid 1 wasn't found in the PID hash, so we added this loop
so that it keeps looking until it finds one.
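A hedged sketch of how the ipcs code could borrow that idea -- find any task
context that still exists, then read its nsproxy -- using the same crash
facilities as the snippet above (the helper name is mine, and an extension
would resolve the nsproxy.ipc_ns offset itself rather than rely on an
offset_table entry):

static ulong
ipcs_find_ipc_ns(void)
{
        struct task_context *tc;
        ulong pid, nsproxy, ipc_ns;

        /* find a context that is still present in the task list */
        pid = 1;
        while ((tc = pid_to_context(pid)) == NULL)
                pid++;

        readmem(tc->task + OFFSET(task_struct_nsproxy), KVADDR,
                &nsproxy, sizeof(void *), "task nsproxy", FAULT_ON_ERROR);
        readmem(nsproxy + MEMBER_OFFSET("nsproxy", "ipc_ns"), KVADDR,
                &ipc_ns, sizeof(void *), "nsproxy ipc_ns", FAULT_ON_ERROR);

        return ipc_ns;
}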

Dave

04-11-2012, 04:20 PM
Dave Anderson

add a new command: ipcs

----- Original Message -----

> Usually pid 1 would suffice, but as I recall, Bob Montgomery ran into
> a vmcore where pid 1 wasn't found in the PID hash, so we added this so
> that it keeps looking until it found one?

Or perhaps just use the default "init_nsproxy", or even better, go
directly to "init_ipc_ns":

struct nsproxy init_nsproxy = {
        .count  = ATOMIC_INIT(1),
        .uts_ns = &init_uts_ns,
#if defined(CONFIG_POSIX_MQUEUE) || defined(CONFIG_SYSVIPC)
        .ipc_ns = &init_ipc_ns,
#endif
        .mnt_ns = NULL,
        .pid_ns = &init_pid_ns,
#ifdef CONFIG_NET
        .net_ns = &init_net,
#endif
};
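In extension terms that might look something like the sketch below -- a
fallback only, assuming crash's symbol_exists()/symbol_value() and readmem()
facilities; the final return of 0 is just a placeholder for the pre-namespace
code path:

static ulong
ipcs_ipc_ns_fallback(void)
{
        ulong nsproxy, ipc_ns;

        if (symbol_exists("init_ipc_ns"))
                return symbol_value("init_ipc_ns");

        if (symbol_exists("init_nsproxy")) {
                nsproxy = symbol_value("init_nsproxy");
                readmem(nsproxy + MEMBER_OFFSET("nsproxy", "ipc_ns"), KVADDR,
                        &ipc_ns, sizeof(void *), "init_nsproxy ipc_ns",
                        FAULT_ON_ERROR);
                return ipc_ns;
        }

        return 0;   /* pre-namespace kernel: use the old global shm_ids et al. */
}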

Dave


04-12-2012, 09:28 AM
qiaonuohan

add a new command: ipcs

Hello Dave,

At 2012-4-11 21:58, Dave Anderson wrote:

Then, if the command is eventually accepted as a built-in, it will
be a simple matter of removing the stuff shown above, changing
all the _OFFSET_() and _SIZE_() callers to OFFSET() and SIZE(), and
adding the members to the offset_table and size_table structures.


I know I must post it as an extension module at first. But I still have
concerns about how it might be accepted as a built-in. What does it take to
be good enough to be accepted? I would also like to put in some effort to
accelerate the process. Would you please give me some suggestions?


--
--
Regards
Qiao Nuohan


04-12-2012, 01:43 PM
Dave Anderson

add a new command: ipcs

----- Original Message -----
> Hello Dave,
>
> At 2012-4-11 21:58, Dave Anderson wrote:
> > Then, if the command is eventually accepted as a built-in, it will
> > be a simple matter of removing the stuff shown above, changing
> > all the _OFFSET_() and _SIZE_() callers to OFFSET() and SIZE(), and
> > adding the members to the offset_table and size_table structures.
>
> I know I must post it as an extension module at first. But I still have
> concerns about how it might be accepted as a built-in. What does it take to
> be good enough to be accepted? I would also like to put in some effort to
> accelerate the process. Would you please give me some suggestions?

The acceptance criteria are purely subjective on my part, so there are
no set guidelines as to what is "good enough" to be accepted.

However, please consider that the last time a new command was added to
the crash utility was the "extend" command back in August 2001.

The extension module facility was added so that users could write their
own commands for whatever private purposes they were working on without
having to re-patch the base crash utility each time a new version came
out. And at the same time the base crash utility could remain simpler
and more maintainable, while affording the flexibility to those who are
inclined to write their own commands.
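For readers who haven't used the facility: an extension module is just a
shared object, built against the crash source tree and loaded at runtime
with the "extend" command, that registers its commands when loaded. A
minimal skeleton, modeled on the sample extensions shipped with crash (the
ipcs_cmd() body is omitted here):

/* ipcs.c -- copy into the extensions/ subdirectory of a crash source tree,
 * build with "make extensions", then load with "extend ipcs.so".
 */
#include "defs.h"                       /* crash's main header */

static void ipcs_cmd(void);

static char *help_ipcs[] = {
        "ipcs",                         /* command name */
        "System V IPC facilities",      /* short description */
        "[-smMq] [id | address]",       /* argument synopsis */
        "  Displays the shared memory segments, semaphore arrays and",
        "  message queues found in the dumpfile.",
        NULL
};

static struct command_table_entry command_table[] = {
        { "ipcs", ipcs_cmd, help_ipcs, 0 },
        { NULL }
};

void __attribute__((constructor))
ipcs_init(void)
{
        register_extension(command_table);
}

void __attribute__((destructor))
ipcs_fini(void)
{
        /* nothing to clean up in this sketch */
}

static void
ipcs_cmd(void)
{
        /* ... the actual ipcs implementation goes here ... */
}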

Dave







04-13-2012, 10:00 AM
qiaonuohan

add a new command: ipcs

Hello Dave,

The patch has been changed into an extension module, and the options and
their related output have been changed as well. Please check.



--
--
Regards
Qiao Nuohan


04-13-2012, 09:00 PM
Dave Anderson

add a new command: ipcs

----- Original Message -----
> Hello Dave,
>
> The patch has been changed into an extension module, and the options and
> their related output have been changed as well. Please check.
>


Thanks for doing that, as it makes it easier to build/test/debug.

This version looks to cover the earlier 2.6 kernels; at a minimum, all
RHEL4 2.6.9-era kernels now work OK. Thanks for making that work as well.

I should note that I only have one 2.6.5-era SLES9 dumpfile -- but it
may be a hybrid that is patched by another company, and it requires a
System.map file, so it may be suspect. Anyway, it fails the shared
memory display because the (first) shmid_kernel address contains
bogus data:

crash> ipcs -m
------ Shared Memory Segments ------
SHM_KERNEL KEY SHMID UID PERMS BYTES NATTCH STATUS
ipcs: invalid kernel virtual address: 400108010 type: "file.f_dentry"
crash>

The semaphore command shows suspect data as well:

crash> ipcs -s
------ Semaphore Arrays --------
SEM_ARRAY KEY SEMID UID PERMS NSEMS
10302b127a0 0x00000000 1 0 2 62388694941697

crash>

And the message queue seems to work, but it doesn't show "(none allocated)":

crash> ipcs -q
------ Message Queues --------
MSG_QUEUE KEY MSQID UID PERMS USED-BYTES MESSAGES

crash>

In any case, because it's a somewhat strange/modified SLES9 kernel, I don't
want to say that the command doesn't work with that kernel version. If
any SUSE kernel users on this list can verify it, that would be helpful.
(Just go to the top-level directory of any crash source tree, copy the ipcs.c
file into the extensions subdirectory, and enter "make extensions".)

Anyway, here are my comments with this version.

The basic "ipcs" output looks pretty good -- except for:

(1) Please change the "SHM_KERNEL" header string to "SHMID_KERNEL" so
that it reflects the name of the actual kernel data structure, i.e.,
the same way that "MSG_QUEUE" and "SEM_ARRAY" reflect the kernel's
"msg_queue" and "sem_array" structure names.

(2) Please remove the "0x" from the KEY columns. Continue to display
the key value as a zero-filled 4-byte (integer) hexadecimal value,
which is enough to make it obvious that it's not decimal.

However, I really don't like the "-u" option, either used alone
or in conjunction with "-i". It's confusing, redundant, and in most
cases pretty much useless.

For example:

crash> ipcs -s
------ Semaphore Status --------
------ Semaphore Arrays --------
SEM_ARRAY KEY SEMID UID PERMS NSEMS
1007bc33d90 0x11016565 0 0 644 2
1007bc33b90 0x71014002 32769 0 666 1
100f3348990 0xc9e03647 163842 0 644 3

crash> ipcs -su
------ Semaphore Status --------
used arrays = 3
allocated semaphores = 6

crash>

Is the "-u" option really even necessary for semaphores? I can
easily count the number of arrays and count the NSEMS column if
I actually wanted to know how many allocated semaphores there are.
But who cares?

The same thing applies to the message queues:

crash> ipcs -q
------ Message Queues --------
MSG_QUEUE KEY MSQID UID PERMS USED-BYTES MESSAGES
107f8ef72d0 0x23064010 0 0 600 0 0
107f80113d0 0x5106000d 32769 0 700 1068 1
105f0c43e90 0x000004d2 98306 0 666 0 0

crash> ipcs -qu
------ Messages Status --------
allocated queues = 3
used headers = 1
used space = 1068 bytes

crash>

I can see the USED-BYTES column, or add up the USED-BYTES values, and can
obviously see the number of queues. But who cares?

So I see nothing gained by implementing "-u" for message queues or
semaphores. And when combined with "-i id", they are even more
useless.

However, perhaps there may be some useful extra information
associated with shared memory:

crash> ipcs -m
------ Shared Memory Segments ------
SHM_KERNEL KEY SHMID UID PERMS BYTES NATTCH STATUS
10007b88d90 0x01000007 688128 0 600 512000 2
10007df89d0 0x00000000 360449 0 600 393216 2 dest
10007bad7d0 0x00000000 131074 0 600 393216 2 dest
10007b832d0 0x00000000 163843 0 600 393216 2 dest
1000f6b1dd0 0x00000000 196612 0 600 393216 2 dest
10007baca90 0x00000000 229381 0 600 393216 2 dest
10007b831d0 0x00000000 262150 0 600 393216 2 dest
10007b874d0 0x00000000 294919 0 600 393216 2 dest
10007ba8690 0x00000000 327688 0 600 393216 2 dest
10007df87d0 0x00000000 393225 0 600 393216 2 dest
10007bacc90 0x00000000 425994 0 600 393216 2 dest
10007dfc6d0 0x00000000 458763 0 600 393216 2 dest
1000fbfb9d0 0x00000000 491532 0 600 393216 2 dest
10007b88b90 0x00000000 524301 0 600 393216 2 dest
10007b84990 0xf900c00c 720910 0 600 189 1

crash> ipcs -mu
----- Shared Memroy Status --------
segments allocated 15
pages allocatd 1374
pages resident 1106
pages swapped 994
swap performance attemts 0
swap performance successes 0

crash>

I also note that you insist on dumping the shared memory inode
somewhere, but it certainly looks out of place here:

crash> ipcs -mi 688128
SHMID: 688128
------ Shared Memroy Status --------
segments allocated 15
pages allocatd 125
pages resident 13
pages swapped 35
swap performance attemts 0
swap performance successes 0
vfs_inode 0x10010029da0
crash>

But if you had taken my suggestion, and followed the tradition
of "kmem -s" and "kmem -S" or "kmem -f" and "kmem -F", you could
dump the statistics of each shared memory segment after the
one-liner, i.e., like this:

crash> ipcs -m
SHM_KERNEL KEY SHMID UID PERMS BYTES NATTCH STATUS
ffff810036f67490 0x00000000 65536 3369 600 393216 2 dest
ffff810036f67390 0x00000000 98305 3369 600 393216 2 dest
ffff810036f67690 0x00000000 131074 3369 600 393216 2 dest
ffff810036f67190 0x00000000 163843 3369 600 393216 2 dest
ffff8100329184d0 0x00000000 196612 3369 600 393216 2 dest
ffff81003f01d790 0x00000000 229381 3369 600 393216 2 dest
crash>

where -M would do the same thing, but separate and follow each one-liner
above with the statistics associated with it:

crash> ipcs -M
SHM_KERNEL KEY SHMID UID PERMS BYTES NATTCH STATUS
ffff810036f67490 0x00000000 65536 3369 600 393216 2 dest
(display segment statistics here)

SHM_KERNEL KEY SHMID UID PERMS BYTES NATTCH STATUS
ffff810036f67390 0x00000000 98305 3369 600 393216 2 dest
(display segment statistics here)

SHM_KERNEL KEY SHMID UID PERMS BYTES NATTCH STATUS
ffff810036f67690 0x00000000 131074 3369 600 393216 2 dest
(display segment statistics here)

SHM_KERNEL KEY SHMID UID PERMS BYTES NATTCH STATUS
ffff810036f67190 0x00000000 163843 3369 600 393216 2 dest
(display segment statistics here)

SHM_KERNEL KEY SHMID UID PERMS BYTES NATTCH STATUS
ffff8100329184d0 0x00000000 196612 3369 600 393216 2 dest
(display segment statistics here)

SHM_KERNEL KEY SHMID UID PERMS BYTES NATTCH STATUS
ffff81003f01d790 0x00000000 229381 3369 600 393216 2 dest
(display segment statistics here)

(display cumulative shared memory statistics here)

crash>

But as I mentioned before, it's hard to conceive of a compelling reason to
have additional "ipcs -S" or "ipcs -Q" options.

So you could simplify the invocation to allow these options:

ipcs [-s | -q | -[mM]] [-i id | unique-address-of-first-column]

And if by chance the "id" value is used by more than one facility,
then you could dump them both.

Thanks,
Dave

04-18-2012, 09:57 AM
qiaonuohan

add a new command: ipcs

Hello Dave,

I have changed the command according to your suggestions. But I don't have
any SUSE kernels at hand, so the problems with SLES9 may need a bit of time
to get fixed.


I am sending this mail together with the "ipcs.c" file, which does not yet
fix the SLES9 problems. I will focus on them later. Once the problems are
fixed, I will resend the code.


--
--
Regards
Qiao Nuohan


04-18-2012, 07:33 PM
Dave Anderson

add a new command: ipcs

----- Original Message -----
> Hello Dave,
>
> I have changed the command according to your suggestions. But I don't have
> any SUSE kernels at hand, so the problems with SLES9 may need a bit of time
> to get fixed.
>
> I am sending this mail together with the "ipcs.c" file, which does not yet
> fix the SLES9 problems. I will focus on them later. Once the problems are
> fixed, I will resend the code.
>

This is looking pretty good...

A few comments:

This confusion is unnecessary:

crash> ipcs ffff81042fc7d9d0
ipcs: specified one of -s -m -M -q together with -i option/addr
Usage:
ipcs [-smMq] [-i id | addr]
Enter "help ipcs" for details.
crash>

When using a shmid_kernel, msg_queue or sem_array address, why force
the user to also enter -s, -m, -M, or -q? Those addresses are guaranteed
to be unique values.

And for that matter, you could drop the "-i" entirely. There may be
duplicate id values among the 3 facilities, but if that's the case,
you could just display all of them. That way, it could be simplified
to this:

Usage: ipcs [-smMq] [id | address]

And you could also allow multiple id and/or address values to be entered,
as is done by most of the other crash commands.
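A hedged sketch of what that simplified argument handling could look like in
the extension's command function -- getopt() over crash's argcnt/args globals
is the usual idiom, stol() converts the remaining arguments, and the IPCS_*
flags and ipcs_show_*() helpers are hypothetical names of mine:

#define IPCS_SHM          (0x1)
#define IPCS_SHM_VERBOSE  (0x2)
#define IPCS_SEM          (0x4)
#define IPCS_MSG          (0x8)

static void ipcs_show_all(int flags);               /* hypothetical */
static void ipcs_show_one(int flags, ulong value);  /* hypothetical */

static void
ipcs_cmd(void)
{
        int c, flags = 0;
        ulong value;

        while ((c = getopt(argcnt, args, "smMq")) != EOF) {
                switch (c) {
                case 's': flags |= IPCS_SEM; break;
                case 'q': flags |= IPCS_MSG; break;
                case 'M': flags |= IPCS_SHM_VERBOSE;  /* fall through */
                case 'm': flags |= IPCS_SHM; break;
                default:
                        cmd_usage(pc->curcmd, SYNOPSIS);
                        return;
                }
        }

        if (!flags)     /* no facility options: show everything */
                flags = IPCS_SHM|IPCS_SEM|IPCS_MSG;

        if (!args[optind]) {
                ipcs_show_all(flags);          /* no id/address arguments given */
                return;
        }

        /* accept multiple ids and/or structure addresses */
        while (args[optind]) {
                value = stol(args[optind], FAULT_ON_ERROR, NULL);
                ipcs_show_one(flags, value);   /* match by id or by address */
                optind++;
        }
}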

And a few other aesthetic issues...

The "------ Shared Memory Segments ------", "------ Semaphore Arrays --------"
and "------ Message Queues --------" lines are unnecessary. It's obvious
what each section is showing, because the items in the headers self-identify
what facility is being dumped.

The semaphore and message queue KEY values still have "0x":

SEM_ARRAY KEY SEMID UID PERMS NSEMS
ffff88003de05610 0x00000000 0 0 600 1
ffff88003775f250 0x00000000 98305 0 600 1

MSG_QUEUE KEY MSQID UID PERMS USED-BYTES MESSAGES
ffff81042fc7d9d0 0x51008003 0 0 700 0 0
ffff81062f9f0790 0xffffffff 32769 0 600 0 0
ffff81062fe1d0d0 0x000004d2 98306 0 666 0 0

The ipcs -M output could be condensed a bit, from this:

SHMID_KERNEL KEY SHMID UID PERMS BYTES NATTCH STATUS
ffff81062f9f0d90 740004b7 2031616 0 600 4 0 dest
PAGES ALLOCATED 1
PAGES RESIDENT 0
PAGES SWAPPED 1
SWAP PERFORMANCE ATTEMPTS 0
SWAP PERFORMANCE SUCCESSES 0
vfs_inode 0xffff81062f5ca158

to something like this:

SHMID_KERNEL KEY SHMID UID PERMS BYTES NATTCH STATUS
ffff81062f9f0d90 740004b7 2031616 0 600 4 0 dest
PAGES ALLOCATED: 1 RESIDENT: 0 SWAPPED: 1
SWAP ATTEMPTS: 0 SUCCESSES: 0
VFS_INODE: ffff81062f5ca158
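If it helps, the condensed form is just a matter of packing the fields onto
two or three fprintf(fp, ...) lines; the shm_stats container below is a
hypothetical stand-in for however the extension actually carries these values:

struct shm_stats {                  /* hypothetical container */
        long allocated, resident, swapped;
        long swap_attempts, swap_successes;
        ulong vfs_inode;
};

static void
show_shm_stats(struct shm_stats *s)
{
        fprintf(fp, "PAGES ALLOCATED: %ld  RESIDENT: %ld  SWAPPED: %ld\n",
                s->allocated, s->resident, s->swapped);
        fprintf(fp, "SWAP ATTEMPTS: %ld  SUCCESSES: %ld\n",
                s->swap_attempts, s->swap_successes);
        fprintf(fp, "VFS_INODE: %lx\n", s->vfs_inode);
}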

Other than those few items, I like it...

Thanks,
Dave


