08-30-2012, 02:16 AM
Zhang Yanfei

Fix bugs in runq

On 2012/08/30 02:30, Dave Anderson wrote:
>
>
> ----- Original Message -----
>
>>> Another question re: your patch -- is it possible to have a "depth" greater
>>> than 1?
>>
>> Yes, "depth" could be greater than 1, see the example below:
>>
>> CPU 0 RUNQUEUE: ffff880028216680
>> CURRENT: PID: 17085 TASK: ffff880137c63540 COMMAND: "bash"
>> RT PRIO_ARRAY: ffff880028216808 <-- depth = 0
>> [ 0] PID: 17129 TASK: ffff880037aeaaa0 COMMAND: "rtloop99"
>> PID: 2832 TASK: ffff88013b09cae0 COMMAND: "rtkit-daemon"
>> PID: 6 TASK: ffff88013d7c6080 COMMAND: "watchdog/0"
>> [ 1] GROUP RT PRIO_ARRAY: ffff88002ca65000 <-- depth = 1
>> [ 1] GROUP RT PRIO_ARRAY: ffff880015821000 <-- depth = 2
>> [ 1] PID: 17126 TASK: ffff880135d2a040 COMMAND: "rtloop98"
>> [ 98] PID: 17119 TASK: ffff88010190d500 COMMAND: "rtloop1"
>> PID: 17121 TASK: ffff88013bd27500 COMMAND: "rtloop1"
>> PID: 17120 TASK: ffff88010190caa0 COMMAND: "rtloop1"
>> CFS RB_ROOT: ffff880028216718
> ...
>
>> Hmm, I don't think the depth would get that big. So what do you think of this
>> kind of output?
>>
>> The attached patch just changed "CHILD" to "GROUP".
>
> Interesting -- how did you set up the depth-greater-than-one scenario?

Below is my script to run a number of RT tasks in different cgroups:

#!/bin/bash

# Create a child cpu cgroup and give it an RT runtime budget.
mkdir /cgroup/cpu/test1
echo 850000 > /cgroup/cpu/test1/cpu.rt_runtime_us

# Start RT tasks and move them into the test1 group.
./rtloop1 &
echo $! > /cgroup/cpu/test1/tasks
./rtloop1 &
echo $! > /cgroup/cpu/test1/tasks
./rtloop1 &
echo $! > /cgroup/cpu/test1/tasks
./rtloop98 &
echo $! > /cgroup/cpu/test1/tasks
./rtloop45 &
echo $! > /cgroup/cpu/test1/tasks
./rtloop99 &
echo $! > /cgroup/cpu/test1/tasks

# Create a nested group inside test1, so the group hierarchy goes one level deeper.
mkdir /cgroup/cpu/test1/test11
echo 550000 > /cgroup/cpu/test1/test11/cpu.rt_runtime_us

# These RT tasks go into the nested test11 group.
./rtloop98 &
echo $! > /cgroup/cpu/test1/test11/tasks
./rtloop99 &
echo $! > /cgroup/cpu/test1/test11/tasks

# And these two stay in the root group.
./rtloop98 &
./rtloop99 &
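
The rtloopNN programs referenced above are not included in this mail; each is essentially a task that puts itself into SCHED_FIFO at priority NN and busy-loops. A minimal sketch of such a helper (the RT_PRIO define and the rtloop.c filename are illustrative assumptions, not the exact code used) looks roughly like this:

/* rtloop.c -- illustrative sketch of an "rtloopNN" helper: put the task
 * into SCHED_FIFO at a fixed priority and spin so it stays runnable.
 * Build e.g.:  gcc -DRT_PRIO=99 -o rtloop99 rtloop.c
 * Run as root (or with CAP_SYS_NICE), since SCHED_FIFO requires it.
 */
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef RT_PRIO
#define RT_PRIO 1            /* RT priority baked in at build time */
#endif

int main(void)
{
	struct sched_param param = { .sched_priority = RT_PRIO };

	if (sched_setscheduler(0, SCHED_FIFO, &param) < 0) {
		perror("sched_setscheduler");
		exit(1);
	}

	for (;;)
		;            /* busy loop: remain on the RT runqueue */

	return 0;
}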

>
> Anyway, given that it is possible, let's at least tighten up the output display
> by changing each "9 * depth" usage to be "6 * depth". That should alter
> your example output to look like this:
>
> CPU 0 RUNQUEUE: ffff880028216680
> CURRENT: PID: 17085 TASK: ffff880137c63540 COMMAND: "bash"
> RT PRIO_ARRAY: ffff880028216808
> [ 0] PID: 17129 TASK: ffff880037aeaaa0 COMMAND: "rtloop99"
> PID: 2832 TASK: ffff88013b09cae0 COMMAND: "rtkit-daemon"
> PID: 6 TASK: ffff88013d7c6080 COMMAND: "watchdog/0"
> [ 1] GROUP RT PRIO_ARRAY: ffff88002ca65000
> [ 1] GROUP RT PRIO_ARRAY: ffff880015821000
> [ 1] PID: 17126 TASK: ffff880135d2a040 COMMAND: "rtloop98"
> [ 98] PID: 17119 TASK: ffff88010190d500 COMMAND: "rtloop1"
> PID: 17121 TASK: ffff88013bd27500 COMMAND: "rtloop1"
> PID: 17120 TASK: ffff88010190caa0 COMMAND: "rtloop1"
> CFS RB_ROOT: ffff880028216718
> ...

Hmm, this kind of output is not that easy to understand...
For example:
RT PRIO_ARRAY: ffff880028296808
[ 0] GROUP RT PRIO_ARRAY: ffff880103ded800
[ 0] GROUP RT PRIO_ARRAY: ffff88011ae70800
[ 0] PID: 17127 TASK: ffff8800378f6040 COMMAND: "rtloop99"
PID: 17124 TASK: ffff8800a9592ae0 COMMAND: "rtloop99"
[ 1] PID: 17122 TASK: ffff88011aec3500 COMMAND: "rtloop98"
[ 54] PID: 17123 TASK: ffff88013b414ae0 COMMAND: "rtloop45"
PID: 10 TASK: ffff88013cc2cae0 COMMAND: "watchdog/1"
PID: 7 TASK: ffff88013d7ef500 COMMAND: "migration/1"
[ 1] PID: 17128 TASK: ffff880139761540 COMMAND: "rtloop98"

This output looks confusing: it seems as if tasks 10 and 7 are in group
ffff880103ded800, although they are not.

I think we can avoid this in two ways:
1. Change 6 * depth to 8 * depth, for proper indentation:
RT PRIO_ARRAY: ffff880028296808
[ 0] GROUP RT PRIO_ARRAY: ffff880103ded800
[ 0] GROUP RT PRIO_ARRAY: ffff88011ae70800
[ 0] PID: 17127 TASK: ffff8800378f6040 COMMAND: "rtloop99"
PID: 17124 TASK: ffff8800a9592ae0 COMMAND: "rtloop99"
[ 1] PID: 17122 TASK: ffff88011aec3500 COMMAND: "rtloop98"
[ 54] PID: 17123 TASK: ffff88013b414ae0 COMMAND: "rtloop45"
PID: 10 TASK: ffff88013cc2cae0 COMMAND: "watchdog/1"
PID: 7 TASK: ffff88013d7ef500 COMMAND: "migration/1"
[ 1] PID: 17128 TASK: ffff880139761540 COMMAND: "rtloop98"

2. Print the priority for all tasks:
RT PRIO_ARRAY: ffff880028296808
[ 0] GROUP RT PRIO_ARRAY: ffff880103ded800
[ 0] GROUP RT PRIO_ARRAY: ffff88011ae70800
[ 0] PID: 17127 TASK: ffff8800378f6040 COMMAND: "rtloop99"
[ 0] PID: 17124 TASK: ffff8800a9592ae0 COMMAND: "rtloop99"
[ 1] PID: 17122 TASK: ffff88011aec3500 COMMAND: "rtloop98"
[ 54] PID: 17123 TASK: ffff88013b414ae0 COMMAND: "rtloop45"
[ 0] PID: 10 TASK: ffff88013cc2cae0 COMMAND: "watchdog/1"
[ 0] PID: 7 TASK: ffff88013d7ef500 COMMAND: "migration/1"
[ 1] PID: 17128 TASK: ffff880139761540 COMMAND: "rtloop98"

I prefer the second one. What do you think?
I have attached two patches, one for way 1 and the other for way 2.

>
> And also, I'd prefer to not create the dangling "static int depth",
> but rather to add a depth argument to dump_RT_prio_array(), where
> dump_CFS_runqueues() passes a 0, and dump_RT_prio_array() passes
> "depth+1" to itself:
>
> static void
> dump_RT_prio_array(int depth, ulong k_prio_array, char *u_prio_array)
> {

......

> }
>
> Can you verify that those changes work for you?

OK.

Thanks
Zhang Yanfei

From c1330d466e459a680a967f374939d91794a940e7 Mon Sep 17 00:00:00 2001
From: zhangyanfei <zhangyanfei@cn.fujitsu.com>
Date: Thu, 30 Aug 2012 10:12:10 +0800
Subject: [PATCH] Fix bug: runq does not support RT group scheduling

Signed-off-by: zhangyanfei <zhangyanfei@cn.fujitsu.com>
---
defs.h | 2 ++
symbols.c | 4 ++++
task.c | 52 ++++++++++++++++++++++++++++++++++++++++------------
3 files changed, 46 insertions(+), 12 deletions(-)

diff --git a/defs.h b/defs.h
index 4a8e2e3..4af670d 100755
--- a/defs.h
+++ b/defs.h
@@ -1785,6 +1785,7 @@ struct offset_table { /* stash of commonly-used offsets */
long log_level;
long log_flags_level;
long timekeeper_xtime_sec;
+ long sched_rt_entity_my_q;
};

struct size_table { /* stash of commonly-used sizes */
@@ -1919,6 +1920,7 @@ struct size_table { /* stash of commonly-used sizes */
long msg_queue;
long log;
long log_level;
+ long rt_rq;
};

struct array_table {
diff --git a/symbols.c b/symbols.c
index 2646ff8..bbadd5e 100755
--- a/symbols.c
+++ b/symbols.c
@@ -8812,6 +8812,8 @@ dump_offset_table(char *spec, ulong makestruct)
OFFSET(log_level));
fprintf(fp, " log_flags_level: %ld
",
OFFSET(log_flags_level));
+ fprintf(fp, " sched_rt_entity_my_q: %ld
",
+ OFFSET(sched_rt_entity_my_q));

fprintf(fp, "
size_table:
");
fprintf(fp, " page: %ld
", SIZE(page));
@@ -9027,6 +9029,8 @@ dump_offset_table(char *spec, ulong makestruct)
SIZE(log));
fprintf(fp, " log_level: %ld
",
SIZE(log_level));
+ fprintf(fp, " rt_rq: %ld
",
+ SIZE(rt_rq));

fprintf(fp, "
array_table:
");
/*
diff --git a/task.c b/task.c
index 6e4cfec..824058c 100755
--- a/task.c
+++ b/task.c
@@ -67,7 +67,7 @@ static void dump_task_runq_entry(struct task_context *);
static int dump_tasks_in_cfs_rq(ulong);
static void dump_on_rq_tasks(void);
static void dump_CFS_runqueues(void);
-static void dump_RT_prio_array(ulong, char *);
+static void dump_RT_prio_array(int, ulong, char *);
static void task_struct_member(struct task_context *,unsigned int, struct reference *);
static void signal_reference(struct task_context *, ulong, struct reference *);
static void do_sig_thread_group(ulong);
@@ -7552,6 +7552,7 @@ dump_CFS_runqueues(void)

if (!VALID_STRUCT(cfs_rq)) {
STRUCT_SIZE_INIT(cfs_rq, "cfs_rq");
+ STRUCT_SIZE_INIT(rt_rq, "rt_rq");
MEMBER_OFFSET_INIT(rq_rt, "rq", "rt");
MEMBER_OFFSET_INIT(rq_nr_running, "rq", "nr_running");
MEMBER_OFFSET_INIT(task_struct_se, "task_struct", "se");
@@ -7562,6 +7563,8 @@ dump_CFS_runqueues(void)
"cfs_rq");
MEMBER_OFFSET_INIT(sched_entity_my_q, "sched_entity",
"my_q");
+ MEMBER_OFFSET_INIT(sched_rt_entity_my_q, "sched_rt_entity",
+ "my_q");
MEMBER_OFFSET_INIT(sched_entity_on_rq, "sched_entity", "on_rq");
MEMBER_OFFSET_INIT(cfs_rq_rb_leftmost, "cfs_rq", "rb_leftmost");
MEMBER_OFFSET_INIT(cfs_rq_nr_running, "cfs_rq", "nr_running");
@@ -7629,7 +7632,7 @@ dump_CFS_runqueues(void)
OFFSET(cfs_rq_tasks_timeline));
}

- dump_RT_prio_array(runq + OFFSET(rq_rt) + OFFSET(rt_rq_active),
+ dump_RT_prio_array(0, runq + OFFSET(rq_rt) + OFFSET(rt_rq_active),
&runqbuf[OFFSET(rq_rt) + OFFSET(rt_rq_active)]);

fprintf(fp, " CFS RB_ROOT: %lx
", (ulong)root);
@@ -7649,7 +7652,7 @@ dump_CFS_runqueues(void)
}

static void
-dump_RT_prio_array(ulong k_prio_array, char *u_prio_array)
+dump_RT_prio_array(int depth, ulong k_prio_array, char *u_prio_array)
{
int i, c, tot, cnt, qheads;
ulong offset, kvaddr, uvaddr;
@@ -7657,8 +7660,11 @@ dump_RT_prio_array(ulong k_prio_array, char *u_prio_array)
struct list_data list_data, *ld;
struct task_context *tc;
ulong *tlist;
+ ulong my_q, task_addr;
+ char *rt_rq_buf;

- fprintf(fp, " RT PRIO_ARRAY: %lx
", k_prio_array);
+ if (!depth)
+ fprintf(fp, " RT PRIO_ARRAY: %lx
", k_prio_array);

qheads = (i = ARRAY_LENGTH(rt_prio_array_queue)) ?
i : get_array_length("rt_prio_array.queue", NULL, SIZE(list_head));
@@ -7678,14 +7684,11 @@ dump_RT_prio_array(ulong k_prio_array, char *u_prio_array)
if ((list_head[0] == kvaddr) && (list_head[1] == kvaddr))
continue;

- fprintf(fp, " [%3d] ", i);
-
BZERO(ld, sizeof(struct list_data));
ld->start = list_head[0];
if (VALID_MEMBER(task_struct_rt) &&
VALID_MEMBER(sched_rt_entity_run_list))
- ld->list_head_offset = OFFSET(task_struct_rt) +
- OFFSET(sched_rt_entity_run_list);
+ ld->list_head_offset = OFFSET(sched_rt_entity_run_list);
else
ld->list_head_offset = OFFSET(task_struct_run_list);
ld->end = kvaddr;
@@ -7695,10 +7698,35 @@ dump_RT_prio_array(ulong k_prio_array, char *u_prio_array)
tlist = (ulong *)GETBUF((cnt) * sizeof(ulong));
cnt = retrieve_list(tlist, cnt);
for (c = 0; c < cnt; c++) {
- if (!(tc = task_to_context(tlist[c])))
+ task_addr = tlist[c];
+ if (VALID_MEMBER(sched_rt_entity_my_q)) {
+ readmem(tlist[c] + OFFSET(sched_rt_entity_my_q),
+ KVADDR, &my_q, sizeof(ulong), "my_q",
+ FAULT_ON_ERROR);
+ if (my_q) {
+ rt_rq_buf = GETBUF(SIZE(rt_rq));
+ readmem(my_q, KVADDR, rt_rq_buf,
+ SIZE(rt_rq), "rt_rq",
+ FAULT_ON_ERROR);
+
+ INDENT(5 + 6 * depth);
+ fprintf(fp, "[%3d] ", i);
+ fprintf(fp, "GROUP RT PRIO_ARRAY: %lx
",
+ my_q + OFFSET(rt_rq_active));
+ tot++;
+ dump_RT_prio_array(depth + 1,
+ my_q + OFFSET(rt_rq_active),
+ &rt_rq_buf[OFFSET(rt_rq_active)]);
+ continue;
+ } else {
+ task_addr -= OFFSET(task_struct_rt);
+ }
+ }
+ if (!(tc = task_to_context(task_addr)))
continue;
- if (c)
- INDENT(11);
+
+ INDENT(5 + 6 * depth);
+ fprintf(fp, "[%3d] ", i);
fprintf(fp, "PID: %-5ld TASK: %lx COMMAND: "%s"
",
tc->pid, tc->task, tc->comm);
tot++;
@@ -7707,7 +7735,7 @@ dump_RT_prio_array(ulong k_prio_array, char *u_prio_array)
}

if (!tot) {
- INDENT(5);
+ INDENT(5 + 9 * depth);
fprintf(fp, "[no tasks queued]
");
}
}
--
1.7.1

From 6df681dd0fb65615547602f24fa39823053a8376 Mon Sep 17 00:00:00 2001
From: zhangyanfei <zhangyanfei@cn.fujitsu.com>
Date: Thu, 30 Aug 2012 10:01:36 +0800
Subject: [PATCH] Fix bug: runq does not support RT group scheduling

Signed-off-by: zhangyanfei <zhangyanfei@cn.fujitsu.com>
---
defs.h | 2 ++
symbols.c | 4 ++++
task.c | 49 +++++++++++++++++++++++++++++++++++++++----------
3 files changed, 45 insertions(+), 10 deletions(-)

diff --git a/defs.h b/defs.h
index 4a8e2e3..4af670d 100755
--- a/defs.h
+++ b/defs.h
@@ -1785,6 +1785,7 @@ struct offset_table { /* stash of commonly-used offsets */
long log_level;
long log_flags_level;
long timekeeper_xtime_sec;
+ long sched_rt_entity_my_q;
};

struct size_table { /* stash of commonly-used sizes */
@@ -1919,6 +1920,7 @@ struct size_table { /* stash of commonly-used sizes */
long msg_queue;
long log;
long log_level;
+ long rt_rq;
};

struct array_table {
diff --git a/symbols.c b/symbols.c
index 2646ff8..bbadd5e 100755
--- a/symbols.c
+++ b/symbols.c
@@ -8812,6 +8812,8 @@ dump_offset_table(char *spec, ulong makestruct)
OFFSET(log_level));
fprintf(fp, " log_flags_level: %ld
",
OFFSET(log_flags_level));
+ fprintf(fp, " sched_rt_entity_my_q: %ld
",
+ OFFSET(sched_rt_entity_my_q));

fprintf(fp, "
size_table:
");
fprintf(fp, " page: %ld
", SIZE(page));
@@ -9027,6 +9029,8 @@ dump_offset_table(char *spec, ulong makestruct)
SIZE(log));
fprintf(fp, " log_level: %ld
",
SIZE(log_level));
+ fprintf(fp, " rt_rq: %ld
",
+ SIZE(rt_rq));

fprintf(fp, "
array_table:
");
/*
diff --git a/task.c b/task.c
index 6e4cfec..5b04e99 100755
--- a/task.c
+++ b/task.c
@@ -67,7 +67,7 @@ static void dump_task_runq_entry(struct task_context *);
static int dump_tasks_in_cfs_rq(ulong);
static void dump_on_rq_tasks(void);
static void dump_CFS_runqueues(void);
-static void dump_RT_prio_array(ulong, char *);
+static void dump_RT_prio_array(int, ulong, char *);
static void task_struct_member(struct task_context *,unsigned int, struct reference *);
static void signal_reference(struct task_context *, ulong, struct reference *);
static void do_sig_thread_group(ulong);
@@ -7552,6 +7552,7 @@ dump_CFS_runqueues(void)

if (!VALID_STRUCT(cfs_rq)) {
STRUCT_SIZE_INIT(cfs_rq, "cfs_rq");
+ STRUCT_SIZE_INIT(rt_rq, "rt_rq");
MEMBER_OFFSET_INIT(rq_rt, "rq", "rt");
MEMBER_OFFSET_INIT(rq_nr_running, "rq", "nr_running");
MEMBER_OFFSET_INIT(task_struct_se, "task_struct", "se");
@@ -7562,6 +7563,8 @@ dump_CFS_runqueues(void)
"cfs_rq");
MEMBER_OFFSET_INIT(sched_entity_my_q, "sched_entity",
"my_q");
+ MEMBER_OFFSET_INIT(sched_rt_entity_my_q, "sched_rt_entity",
+ "my_q");
MEMBER_OFFSET_INIT(sched_entity_on_rq, "sched_entity", "on_rq");
MEMBER_OFFSET_INIT(cfs_rq_rb_leftmost, "cfs_rq", "rb_leftmost");
MEMBER_OFFSET_INIT(cfs_rq_nr_running, "cfs_rq", "nr_running");
@@ -7629,7 +7632,7 @@ dump_CFS_runqueues(void)
OFFSET(cfs_rq_tasks_timeline));
}

- dump_RT_prio_array(runq + OFFSET(rq_rt) + OFFSET(rt_rq_active),
+ dump_RT_prio_array(0, runq + OFFSET(rq_rt) + OFFSET(rt_rq_active),
&runqbuf[OFFSET(rq_rt) + OFFSET(rt_rq_active)]);

fprintf(fp, " CFS RB_ROOT: %lx
", (ulong)root);
@@ -7649,7 +7652,7 @@ dump_CFS_runqueues(void)
}

static void
-dump_RT_prio_array(ulong k_prio_array, char *u_prio_array)
+dump_RT_prio_array(int depth, ulong k_prio_array, char *u_prio_array)
{
int i, c, tot, cnt, qheads;
ulong offset, kvaddr, uvaddr;
@@ -7657,8 +7660,11 @@ dump_RT_prio_array(ulong k_prio_array, char *u_prio_array)
struct list_data list_data, *ld;
struct task_context *tc;
ulong *tlist;
+ ulong my_q, task_addr;
+ char *rt_rq_buf;

- fprintf(fp, " RT PRIO_ARRAY: %lx
", k_prio_array);
+ if (!depth)
+ fprintf(fp, " RT PRIO_ARRAY: %lx
", k_prio_array);

qheads = (i = ARRAY_LENGTH(rt_prio_array_queue)) ?
i : get_array_length("rt_prio_array.queue", NULL, SIZE(list_head));
@@ -7678,14 +7684,14 @@ dump_RT_prio_array(ulong k_prio_array, char *u_prio_array)
if ((list_head[0] == kvaddr) && (list_head[1] == kvaddr))
continue;

- fprintf(fp, " [%3d] ", i);
+ INDENT(5 + 8 * depth);
+ fprintf(fp, "[%3d] ", i);

BZERO(ld, sizeof(struct list_data));
ld->start = list_head[0];
if (VALID_MEMBER(task_struct_rt) &&
VALID_MEMBER(sched_rt_entity_run_list))
- ld->list_head_offset = OFFSET(task_struct_rt) +
- OFFSET(sched_rt_entity_run_list);
+ ld->list_head_offset = OFFSET(sched_rt_entity_run_list);
else
ld->list_head_offset = OFFSET(task_struct_run_list);
ld->end = kvaddr;
@@ -7695,10 +7701,33 @@ dump_RT_prio_array(ulong k_prio_array, char *u_prio_array)
tlist = (ulong *)GETBUF((cnt) * sizeof(ulong));
cnt = retrieve_list(tlist, cnt);
for (c = 0; c < cnt; c++) {
- if (!(tc = task_to_context(tlist[c])))
+ task_addr = tlist[c];
+ if (VALID_MEMBER(sched_rt_entity_my_q)) {
+ readmem(tlist[c] + OFFSET(sched_rt_entity_my_q),
+ KVADDR, &my_q, sizeof(ulong), "my_q",
+ FAULT_ON_ERROR);
+ if (my_q) {
+ rt_rq_buf = GETBUF(SIZE(rt_rq));
+ readmem(my_q, KVADDR, rt_rq_buf,
+ SIZE(rt_rq), "rt_rq",
+ FAULT_ON_ERROR);
+ if (c)
+ INDENT(11 + 8 * depth);
+ fprintf(fp, "GROUP RT PRIO_ARRAY: %lx
",
+ my_q + OFFSET(rt_rq_active));
+ tot++;
+ dump_RT_prio_array(depth + 1,
+ my_q + OFFSET(rt_rq_active),
+ &rt_rq_buf[OFFSET(rt_rq_active)]);
+ continue;
+ } else {
+ task_addr -= OFFSET(task_struct_rt);
+ }
+ }
+ if (!(tc = task_to_context(task_addr)))
continue;
if (c)
- INDENT(11);
+ INDENT(11 + 8 * depth);
fprintf(fp, "PID: %-5ld TASK: %lx COMMAND: "%s"
",
tc->pid, tc->task, tc->comm);
tot++;
@@ -7707,7 +7736,7 @@ dump_RT_prio_array(ulong k_prio_array, char *u_prio_array)
}

if (!tot) {
- INDENT(5);
+ INDENT(5 + 8 * depth);
fprintf(fp, "[no tasks queued]
");
}
}
--
1.7.1

--
Crash-utility mailing list
Crash-utility@redhat.com
https://www.redhat.com/mailman/listinfo/crash-utility
 
08-30-2012, 02:18 PM
Dave Anderson

Fix bugs in runq

----- Original Message -----

> 2. Print the priority for all tasks:
> RT PRIO_ARRAY: ffff880028296808
> [ 0] GROUP RT PRIO_ARRAY: ffff880103ded800
> [ 0] GROUP RT PRIO_ARRAY: ffff88011ae70800
> [ 0] PID: 17127 TASK: ffff8800378f6040 COMMAND: "rtloop99"
> [ 0] PID: 17124 TASK: ffff8800a9592ae0 COMMAND: "rtloop99"
> [ 1] PID: 17122 TASK: ffff88011aec3500 COMMAND: "rtloop98"
> [ 54] PID: 17123 TASK: ffff88013b414ae0 COMMAND: "rtloop45"
> [ 0] PID: 10 TASK: ffff88013cc2cae0 COMMAND: "watchdog/1"
> [ 0] PID: 7 TASK: ffff88013d7ef500 COMMAND: "migration/1"
> [ 1] PID: 17128 TASK: ffff880139761540 COMMAND: "rtloop98"
>
> I prefer the second one. What do you think?
> I have attached two patches, one for way 1 and the other for way 2.

Agreed -- I like patch #2 -- it is queued for crash-6.1.0.

Thanks Zhang,
Dave


--
Crash-utility mailing list
Crash-utility@redhat.com
https://www.redhat.com/mailman/listinfo/crash-utility
 
