Linux Archive > Redhat > Crash Utility

 
 
HATAYAMA Daisuke
 
gcore extension module: user-mode process core dump

The gcore extension module provides a means to create an ELF core dump for
a user-mode process contained within a crash kernel dump. I designed it to
behave like the kernel's ELF core dumper.

For previous discussion, see:
https://www.redhat.com/archives/crash-utility/2010-August/msg00001.html

Compared with the previous version, this release:
- supports more kernel versions, and
- collects register values more accurately (but still not perfectly).

Support Range
=============

|----------------+----------------------------------------------|
| ARCH | X86, X86_64 |
|----------------+----------------------------------------------|
| Kernel Version | RHEL4.8, RHEL5.5, RHEL6.0 and Vanilla 2.6.36 |
|----------------+----------------------------------------------|

TODO
====

The following tasks still remain:
- Improve register collection for active tasks
- Improve callee-saved register collection on x86_64
- Support core dumps for tasks running in x86_32 compatibility mode

Usage
=====

1) Expand the source files under the extensions directory.

Arrange the attached source files as shown below:

./extensions/gcore.c
./extensions/gcore.mk
./extensions/libgcore/gcore_coredump.c
./extensions/libgcore/gcore_coredump_table.c
./extensions/libgcore/gcore_defs.h
./extensions/libgcore/gcore_dumpfilter.c
./extensions/libgcore/gcore_global_data.c
./extensions/libgcore/gcore_regset.c
./extensions/libgcore/gcore_verbose.c
./extensions/libgcore/gcore_x86.c

2) Type ``make extensions''; ``gcore.so'' is then generated under the
extensions directory.

3) Type ``extend gcore.so'' to load the gcore extension module.

See the help message for actual usage; it is attached at the end of this
mail.

4) Type ``extend -u gcore.so'' to unload the gcore extension module.

Help Message
============

NAME
gcore - retrieve a process image as a core dump

SYNOPSIS
gcore
gcore [-v vlevel] [-f filter] [pid | taskp]*
This command retrieves a process image as a core dump.

DESCRIPTION

-v Display verbose information according to vlevel:

progress library error page fault
---------------------------------------
0
1 x
2 x
4 x (default)
7 x x x

-f Specify the kinds of memory to be written into the core dump according
to the bitwise filter flag:

AP AS FP FS ELF HP HS
------------------------------
0
1 x
2 x
4 x
8 x
16 x x
32 x
64 x
127 x x x x x x x

AP Anonymous Private Memory
AS Anonymous Shared Memory
FP File-Backed Private Memory
FS File-Backed Shared Memory
ELF ELF header pages in file-backed private memory areas
HP Hugetlb Private Memory
HS Hugetlb Shared Memory

If no pid or taskp is specified, gcore tries to retrieve the process image
of the current task context.

The file name of a generated core dump is core.<pid>, where pid is the PID
of the specified process.

For a multi-thread process, gcore generates a core dump containing
information for all threads, similar to the behaviour of the ELF core
dumper in the Linux kernel.

Note the difference in PID between crash and Linux: the ps command in the
crash utility displays the LWP, while the ps command in Linux displays the
thread group id, precisely the PID of the thread group leader.

gcore provides a core dump filtering facility that allows users to select
what kinds of memory maps are included in the resulting core dump. There
are 7 kinds of memory maps in total, and you can set them up with the set
command. For more detailed information, please see the help command message.

EXAMPLES
Specify the process you want to retrieve as a core dump. Here assume the
process with PID 12345.

crash> gcore 12345
Saved core.12345
crash>

Next, specify by TASK. Here, assume the process at address f9d78000 with
PID 32323.

crash> gcore f9d78000
Saved core.32323
crash>

If multiple arguments are given, gcore dumps the processes in the order
the arguments are given.

crash> gcore 5217 ffff880136d72040 23299 24459 ffff880136420040
Saved core.5217
Saved core.1130
Saved core.1130
Saved core.24459
Saved core.30102
crash>

If no argument is given, gcore tries to retrieve the process of the current
task context.

crash> set
PID: 54321
COMMAND: "bash"
TASK: e0000040f80c0000
CPU: 0
STATE: TASK_INTERRUPTIBLE
crash> gcore
Saved core.54321

When a multi-thread process is specified, the generated core file name has
the thread leader's PID; here it is assumed to be 12340.

crash> gcore 12345
Saved core.12340

It is not allowed to specify the same option twice.

crash> gcore -v 1 1234 -v 1
Usage: gcore
gcore [-v vlevel] [-f filter] [pid | taskp]*
gcore -d
Enter "help gcore" for details.

The -v and -f options may be given in any order.

crash> gcore -v 2 5201 -f 21 ffff880126ff9520 5205
Saved core.5174
Saved core.5217
Saved core.5167
crash> gcore 5201 ffff880126ff9520 -f 21 5205 -v 2
Saved core.5174
Saved core.5217
Saved core.5167

Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
/* gcore.c -- core analysis suite
*
* Copyright (C) 2010 FUJITSU LIMITED
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/

#include "defs.h"
#include <gcore_defs.h>
#include <stdint.h>
#include <elf.h>

static void gcore_offset_table_init(void);
static void gcore_size_table_init(void);

static void do_gcore(char *arg);
static void do_setup_gcore(struct task_context *tc);
static void do_clean_gcore(void);

static struct command_table_entry command_table[] = {
{ "gcore", cmd_gcore, help_gcore, 0 },
#ifdef GCORE_TEST
{ "gcore_test", cmd_gcore_test, help_gcore_test, 0 },
#endif
{ (char *)NULL }
};

int
_init(void) /* Register the command set. */
{
gcore_offset_table_init();
gcore_size_table_init();
gcore_coredump_table_init();
gcore_arch_table_init();
gcore_arch_regsets_init();
register_extension(command_table);
return 1;
}

int
_fini(void)
{
return 1;
}

char *help_gcore[] = {
"gcore",
"gcore - retrieve a process image as a core dump",
"
"
" gcore [-v vlevel] [-f filter] [pid | taskp]*
"
" This command retrieves a process image as a core dump.",
" ",
" -v Display verbose information according to vlevel:",
" ",
" progress library error page fault",
" ---------------------------------------",
" 0",
" 1 x",
" 2 x",
" 4 x (default)",
" 7 x x x",
" ",
" -f Specify kinds of memory to be written into core dumps according to",
" the filter flag in bitwise:",
" ",
" AP AS FP FS ELF HP HS",
" ------------------------------",
" 0",
" 1 x",
" 2 x",
" 4 x",
" 8 x",
" 16 x x",
" 32 x",
" 64 x",
" 127 x x x x x x x",
" ",
" AP Anonymous Private Memory",
" AS Anonymous Shared Memory",
" FP File-Backed Private Memory",
" FS File-Backed Shared Memory",
" ELF ELF header pages in file-backed private memory areas",
" HP Hugetlb Private Memory",
" HS Hugetlb Shared Memory",
" ",
" If no pid or taskp is specified, gcore tries to retrieve the process image",
" of the current task context.",
" ",
" The file name of a generated core dump is core.<pid> where pid is PID of",
" the specified process.",
" ",
" For a multi-thread process, gcore generates a core dump containing",
" information for all threads, which is similar to a behaviour of the ELF",
" core dumper in Linux kernel.",
" ",
" Notice the difference of PID on between crash and linux that ps command in",
" crash utility displays LWP, while ps command in Linux thread group tid,",
" precisely PID of the thread group leader.",
" ",
" gcore provides core dump filtering facility to allow users to select what",
" kinds of memory maps to be included in the resulting core dump. There are",
" 7 kinds memory maps in total, and you can set it up with set command.",
" For more detailed information, please see a help command message.",
" ",
"EXAMPLES",
" Specify the process you want to retrieve as a core dump. Here assume the",
" process with PID 12345.",
" ",
" crash> gcore 12345",
" Saved core.12345",
" crash>",
" ",
" Next, specify by TASK. Here assume the process placing at the address",
" f9d7000 with PID 32323.",
" ",
" crash> gcore f9d78000",
" Saved core.32323",
" crash>",
" ",
" If multiple arguments are given, gcore performs dumping process in the",
" order the arguments are given.",
" ",
" crash> gcore 5217 ffff880136d72040 23299 24459 ffff880136420040",
" Saved core.5217",
" Saved core.1130",
" Saved core.1130",
" Saved core.24459",
" Saved core.30102",
" crash>",
" ",
" If no argument is given, gcore tries to retrieve the process of the current",
" task context.",
" ",
" crash> set",
" PID: 54321",
" COMMAND: "bash"",
" TASK: e0000040f80c0000",
" CPU: 0",
" STATE: TASK_INTERRUPTIBLE",
" crash> gcore",
" Saved core.54321",
" ",
" When a multi-thread process is specified, the generated core file name has",
" the thread leader's PID; here it is assumed to be 12340.",
" ",
" crash> gcore 12345",
" Saved core.12340",
" ",
" It is not allowed to specify two same options at the same time.",
" ",
" crash> gcore -v 1 1234 -v 1",
" Usage: gcore",
" gcore [-v vlevel] [-f filter] [pid | taskp]*",
" gcore -d",
" Enter "help gcore" for details.",
" ",
" It is allowed to specify -v and -f options in a different order.",
" ",
" crash> gcore -v 2 5201 -f 21 ffff880126ff9520 5205",
" Saved core.5174",
" Saved core.5217",
" Saved core.5167",
" crash> gcore 5201 ffff880126ff9520 -f 21 5205 -v 2",
" Saved core.5174",
" Saved core.5217",
" Saved core.5167",
" ",
NULL,
};

void
cmd_gcore(void)
{
int c;
char *foptarg, *voptarg;

if (ACTIVE())
error(FATAL, "no support on live kernel");

gcore_dumpfilter_set_default();
gcore_verbose_set_default();

foptarg = voptarg = NULL;

while ((c = getopt(argcnt, args, "df:v:")) != EOF) {
switch (c) {

case 'f':
if (foptarg)
goto argerr;
foptarg = optarg;
break;
case 'v':
if (voptarg)
goto argerr;
voptarg = optarg;
break;
default:
argerr:
argerrs++;
break;
}
}

if (argerrs) {
cmd_usage(pc->curcmd, SYNOPSIS);
}

if (foptarg) {
ulong value;

if (!decimal(foptarg, 0))
error(FATAL, "filter must be a decimal: %s.\n", foptarg);

value = stol(foptarg, gcore_verbose_error_handle(), NULL);
if (!gcore_dumpfilter_set(value))
error(FATAL, "invalid filter value: %s.\n", foptarg);
}

if (voptarg) {
ulong value;

if (!decimal(voptarg, 0))
error(FATAL, "vlevel must be a decimal: %s.\n", voptarg);

value = stol(voptarg, gcore_verbose_error_handle(), NULL);
if (!gcore_verbose_set(value))
error(FATAL, "invalid vlevel: %s.\n", voptarg);

}

if (!args[optind]) {
do_gcore(NULL);
return;
}

for (; args[optind]; optind++) {
do_gcore(args[optind]);
free_all_bufs();
}

}

/**
* do_gcore - do process core dump for a given task
*
* @arg string that refers to PID or task context's address
*
* Given the string, arg, referring to PID or task context's address,
* do_gcore tries to do process coredump for the corresponding
* task. If the string given is NULL, do_gcore does the process dump
* for the current task context.
*
* Here is the unique exception point in gcore sub-command. Any fatal
* action during gcore sub-command will come back here. Look carefully
* at how IN_FOREACH is used here.
*
* Dynamic allocation in the gcore sub-command fully depends on the buffer
* mechanism provided by the crash utility. do_gcore() never performs any
* freeing operation itself. Thus, it is necessary to call free_all_bufs()
* each time do_gcore() is called. See the end of cmd_gcore().
*/
static void do_gcore(char *arg)
{
if (!setjmp(pc->foreach_loop_env)) {
struct task_context *tc;
ulong dummy;

pc->flags |= IN_FOREACH;

if (arg) {
if (!IS_A_NUMBER(arg))
error(FATAL, "neither pid nor taskp: %s\n", args[optind]);

if (STR_INVALID == str_to_context(arg, &dummy, &tc))
error(FATAL, "invalid task or pid: %s\n", args[optind]);
} else
tc = CURRENT_CONTEXT();

if (is_kernel_thread(tc->task))
error(FATAL, "The specified task is a kernel thread.\n");

do_setup_gcore(tc);
gcore_coredump();
}
pc->flags &= ~IN_FOREACH;
do_clean_gcore();
}

/**
* do_setup_gcore - initialize resources used for process core dump
*
* @tc task context object to be dumped from now on
*
* The resources used for process core dump are characterized by struct
* gcore_data. Look carefully at the definition.
*/
static void do_setup_gcore(struct task_context *tc)
{
gcore->flags = 0UL;
gcore->fd = 0;

if (tc != CURRENT_CONTEXT()) {
gcore->orig = CURRENT_CONTEXT();
(void) set_context(tc->task, tc->pid);
}

snprintf(gcore->corename, CORENAME_MAX_SIZE + 1, "core.%lu.%s",
task_tgid(CURRENT_TASK()), CURRENT_COMM());
}

/**
* do_clean_gcore - clean up resources used for process core dump
*/
static void do_clean_gcore(void)
{
if (gcore->fd > 0)
close(gcore->fd);
if (gcore->flags & GCF_UNDER_COREDUMP) {
if (gcore->flags & GCF_SUCCESS)
fprintf(fp, "Saved %s\n", gcore->corename);
else
fprintf(fp, "Failed.\n");
}
if (gcore->orig)
(void)set_context(gcore->orig->task, gcore->orig->pid);
}

static void gcore_offset_table_init(void)
{
GCORE_MEMBER_OFFSET_INIT(cpuinfo_x86_x86_capability, "cpuinfo_x86", "x86_capability");
GCORE_MEMBER_OFFSET_INIT(cred_gid, "cred", "gid");
GCORE_MEMBER_OFFSET_INIT(cred_uid, "cred", "uid");
GCORE_MEMBER_OFFSET_INIT(desc_struct_base0, "desc_struct", "base0");
GCORE_MEMBER_OFFSET_INIT(desc_struct_base1, "desc_struct", "base1");
GCORE_MEMBER_OFFSET_INIT(desc_struct_base2, "desc_struct", "base2");
GCORE_MEMBER_OFFSET_INIT(fpu_state, "fpu", "state");
GCORE_MEMBER_OFFSET_INIT(inode_i_nlink, "inode", "i_nlink");
GCORE_MEMBER_OFFSET_INIT(nsproxy_pid_ns, "nsproxy", "pid_ns");
GCORE_MEMBER_OFFSET_INIT(mm_struct_arg_start, "mm_struct", "arg_start");
GCORE_MEMBER_OFFSET_INIT(mm_struct_arg_end, "mm_struct", "arg_end");
GCORE_MEMBER_OFFSET_INIT(mm_struct_map_count, "mm_struct", "map_count");
GCORE_MEMBER_OFFSET_INIT(mm_struct_saved_auxv, "mm_struct", "saved_auxv");
GCORE_MEMBER_OFFSET_INIT(pid_level, "pid", "level");
GCORE_MEMBER_OFFSET_INIT(pid_namespace_level, "pid_namespace", "level");
if (MEMBER_EXISTS("pt_regs", "ax"))
GCORE_MEMBER_OFFSET_INIT(pt_regs_ax, "pt_regs", "ax");
else
GCORE_MEMBER_OFFSET_INIT(pt_regs_ax, "pt_regs", "eax");
if (MEMBER_EXISTS("pt_regs", "bp"))
GCORE_MEMBER_OFFSET_INIT(pt_regs_bp, "pt_regs", "bp");
else
GCORE_MEMBER_OFFSET_INIT(pt_regs_bp, "pt_regs", "ebp");
if (MEMBER_EXISTS("pt_regs", "bx"))
GCORE_MEMBER_OFFSET_INIT(pt_regs_bx, "pt_regs", "bx");
else
GCORE_MEMBER_OFFSET_INIT(pt_regs_bx, "pt_regs", "ebx");
if (MEMBER_EXISTS("pt_regs", "cs"))
GCORE_MEMBER_OFFSET_INIT(pt_regs_cs, "pt_regs", "cs");
else
GCORE_MEMBER_OFFSET_INIT(pt_regs_cs, "pt_regs", "xcs");
if (MEMBER_EXISTS("pt_regs", "cx"))
GCORE_MEMBER_OFFSET_INIT(pt_regs_cx, "pt_regs", "cx");
else
GCORE_MEMBER_OFFSET_INIT(pt_regs_cx, "pt_regs", "ecx");
if (MEMBER_EXISTS("pt_regs", "di"))
GCORE_MEMBER_OFFSET_INIT(pt_regs_di, "pt_regs", "di");
else
GCORE_MEMBER_OFFSET_INIT(pt_regs_di, "pt_regs", "edi");
if (MEMBER_EXISTS("pt_regs", "ds"))
GCORE_MEMBER_OFFSET_INIT(pt_regs_ds, "pt_regs", "ds");
else
GCORE_MEMBER_OFFSET_INIT(pt_regs_ds, "pt_regs", "xds");
if (MEMBER_EXISTS("pt_regs", "dx"))
GCORE_MEMBER_OFFSET_INIT(pt_regs_dx, "pt_regs", "dx");
else
GCORE_MEMBER_OFFSET_INIT(pt_regs_dx, "pt_regs", "edx");
if (MEMBER_EXISTS("pt_regs", "es"))
GCORE_MEMBER_OFFSET_INIT(pt_regs_es, "pt_regs", "es");
else
GCORE_MEMBER_OFFSET_INIT(pt_regs_es, "pt_regs", "xes");
if (MEMBER_EXISTS("pt_regs", "flags"))
GCORE_MEMBER_OFFSET_INIT(pt_regs_flags, "pt_regs", "flags");
else
GCORE_MEMBER_OFFSET_INIT(pt_regs_flags, "pt_regs", "eflags");
GCORE_MEMBER_OFFSET_INIT(pt_regs_fs, "pt_regs", "fs");
GCORE_MEMBER_OFFSET_INIT(pt_regs_gs, "pt_regs", "gs");
if (MEMBER_EXISTS("pt_regs", "ip"))
GCORE_MEMBER_OFFSET_INIT(pt_regs_ip, "pt_regs", "ip");
else
GCORE_MEMBER_OFFSET_INIT(pt_regs_ip, "pt_regs", "eip");
if (MEMBER_EXISTS("pt_regs", "orig_eax"))
GCORE_MEMBER_OFFSET_INIT(pt_regs_orig_ax, "pt_regs", "orig_eax");
else
GCORE_MEMBER_OFFSET_INIT(pt_regs_orig_ax, "pt_regs", "orig_ax");
if (MEMBER_EXISTS("pt_regs", "si"))
GCORE_MEMBER_OFFSET_INIT(pt_regs_si, "pt_regs", "si");
else
GCORE_MEMBER_OFFSET_INIT(pt_regs_si, "pt_regs", "esi");
if (MEMBER_EXISTS("pt_regs", "sp"))
GCORE_MEMBER_OFFSET_INIT(pt_regs_sp, "pt_regs", "sp");
else
GCORE_MEMBER_OFFSET_INIT(pt_regs_sp, "pt_regs", "esp");
if (MEMBER_EXISTS("pt_regs", "ss"))
GCORE_MEMBER_OFFSET_INIT(pt_regs_ss, "pt_regs", "ss");
else
GCORE_MEMBER_OFFSET_INIT(pt_regs_ss, "pt_regs", "xss");
GCORE_MEMBER_OFFSET_INIT(pt_regs_xfs, "pt_regs", "xfs");
GCORE_MEMBER_OFFSET_INIT(pt_regs_xgs, "pt_regs", "xgs");
GCORE_MEMBER_OFFSET_INIT(sched_entity_sum_exec_runtime, "sched_entity", "sum_exec_runtime");
GCORE_MEMBER_OFFSET_INIT(signal_struct_cutime, "signal_struct", "cutime");
GCORE_MEMBER_OFFSET_INIT(signal_struct_pgrp, "signal_struct", "pgrp");
GCORE_MEMBER_OFFSET_INIT(signal_struct_session, "signal_struct", "session");
GCORE_MEMBER_OFFSET_INIT(signal_struct_stime, "signal_struct", "stime");
GCORE_MEMBER_OFFSET_INIT(signal_struct_sum_sched_runtime, "signal_struct", "sum_sched_runtime");
GCORE_MEMBER_OFFSET_INIT(signal_struct_utime, "signal_struct", "utime");
GCORE_MEMBER_OFFSET_INIT(task_struct_cred, "task_struct", "cred");
GCORE_MEMBER_OFFSET_INIT(task_struct_gid, "task_struct", "gid");
GCORE_MEMBER_OFFSET_INIT(task_struct_group_leader, "task_struct", "group_leader");
GCORE_MEMBER_OFFSET_INIT(task_struct_real_cred, "task_struct", "real_cred");
if (MEMBER_EXISTS("task_struct", "real_parent"))
GCORE_MEMBER_OFFSET_INIT(task_struct_real_parent, "task_struct", "real_parent");
else if (MEMBER_EXISTS("task_struct", "parent"))
GCORE_MEMBER_OFFSET_INIT(task_struct_real_parent, "task_struct", "parent");
GCORE_MEMBER_OFFSET_INIT(task_struct_se, "task_struct", "se");
GCORE_MEMBER_OFFSET_INIT(task_struct_static_prio, "task_struct", "static_prio");
GCORE_MEMBER_OFFSET_INIT(task_struct_uid, "task_struct", "uid");
GCORE_MEMBER_OFFSET_INIT(task_struct_used_math, "task_struct", "used_math");
GCORE_MEMBER_OFFSET_INIT(thread_info_status, "thread_info", "status");
GCORE_MEMBER_OFFSET_INIT(thread_struct_ds, "thread_struct", "ds");
GCORE_MEMBER_OFFSET_INIT(thread_struct_es, "thread_struct", "es");
GCORE_MEMBER_OFFSET_INIT(thread_struct_fs, "thread_struct", "fs");
GCORE_MEMBER_OFFSET_INIT(thread_struct_fsindex, "thread_struct", "fsindex");
GCORE_MEMBER_OFFSET_INIT(thread_struct_fpu, "thread_struct", "fpu");
GCORE_MEMBER_OFFSET_INIT(thread_struct_gs, "thread_struct", "gs");
GCORE_MEMBER_OFFSET_INIT(thread_struct_gsindex, "thread_struct", "gsindex");
GCORE_MEMBER_OFFSET_INIT(thread_struct_i387, "thread_struct", "i387");
GCORE_MEMBER_OFFSET_INIT(thread_struct_tls_array, "thread_struct", "tls_array");
if (MEMBER_EXISTS("thread_struct", "usersp"))
GCORE_MEMBER_OFFSET_INIT(thread_struct_usersp, "thread_struct", "usersp");
else if (MEMBER_EXISTS("thread_struct", "userrsp"))
GCORE_MEMBER_OFFSET_INIT(thread_struct_usersp, "thread_struct", "userrsp");
if (MEMBER_EXISTS("thread_struct", "xstate"))
GCORE_MEMBER_OFFSET_INIT(thread_struct_xstate, "thread_struct", "xstate");
else if (MEMBER_EXISTS("thread_struct", "i387"))
GCORE_MEMBER_OFFSET_INIT(thread_struct_xstate, "thread_struct", "i387");
GCORE_MEMBER_OFFSET_INIT(thread_struct_io_bitmap_max, "thread_struct", "io_bitmap_max");
GCORE_MEMBER_OFFSET_INIT(thread_struct_io_bitmap_ptr, "thread_struct", "io_bitmap_ptr");
GCORE_MEMBER_OFFSET_INIT(user_regset_n, "user_regset", "n");
GCORE_MEMBER_OFFSET_INIT(vm_area_struct_anon_vma, "vm_area_struct", "anon_vma");

if (symbol_exists("_cpu_pda"))
GCORE_MEMBER_OFFSET_INIT(x8664_pda_oldrsp, "x8664_pda", "oldrsp");
}

static void gcore_size_table_init(void)
{
GCORE_STRUCT_SIZE_INIT(i387_union, "i387_union");
GCORE_MEMBER_SIZE_INIT(mm_struct_saved_auxv, "mm_struct", "saved_auxv");
GCORE_MEMBER_SIZE_INIT(thread_struct_fs, "thread_struct", "fs");
GCORE_MEMBER_SIZE_INIT(thread_struct_fsindex, "thread_struct", "fsindex");
GCORE_MEMBER_SIZE_INIT(thread_struct_gs, "thread_struct", "gs");
GCORE_MEMBER_SIZE_INIT(thread_struct_gsindex, "thread_struct", "gsindex");
GCORE_MEMBER_SIZE_INIT(thread_struct_tls_array, "thread_struct", "tls_array");
GCORE_STRUCT_SIZE_INIT(thread_xstate, "thread_xstate");
GCORE_MEMBER_SIZE_INIT(vm_area_struct_anon_vma, "vm_area_struct", "anon_vma");

}

#ifdef GCORE_TEST

char *help_gcore_test[] = {
"gcore_test",
"gcore_test - test gcore",
"
"
" ",
NULL,
};

void cmd_gcore_test(void)
{
char *message = NULL;

#define TEST_MODULE(test)				\
	message = test();				\
	if (message)					\
		fprintf(fp, #test ": %s\n", message);

TEST_MODULE(gcore_x86_test);
TEST_MODULE(gcore_coredump_table_test);
TEST_MODULE(gcore_dumpfilter_test);

if (!message)
fprintf(fp, "All test cases are successfully passed\n");

#undef TEST_MODULE
}

#endif /* GCORE_TEST */
#
# Copyright (C) 2010 FUJITSU LIMITED
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#

ifeq ($(shell arch), i686)
TARGET=X86
TARGET_CFLAGS=-D_FILE_OFFSET_BITS=64
endif

ifeq ($(shell arch), x86_64)
TARGET=X86_64
TARGET_CFLAGS=
endif

ifeq ($(shell /bin/ls /usr/include/crash/defs.h 2>/dev/null), /usr/include/crash/defs.h)
INCDIR=/usr/include/crash
endif
ifeq ($(shell /bin/ls ./defs.h 2> /dev/null), ./defs.h)
INCDIR=.
endif
ifeq ($(shell /bin/ls ../defs.h 2> /dev/null), ../defs.h)
INCDIR=..
endif

GCORE_CFILES = \
	libgcore/gcore_coredump.c \
	libgcore/gcore_coredump_table.c \
	libgcore/gcore_dumpfilter.c \
	libgcore/gcore_global_data.c \
	libgcore/gcore_regset.c \
	libgcore/gcore_verbose.c

ifneq (,$(findstring $(TARGET), X86 X86_64))
GCORE_CFILES += libgcore/gcore_x86.c
endif

GCORE_OFILES = $(patsubst %.c,%.o,$(GCORE_CFILES))

COMMON_CFLAGS=-Wall -I$(INCDIR) -I./libgcore -fPIC -D$(TARGET)

all: gcore.so

gcore.so: $(INCDIR)/defs.h gcore.c $(GCORE_OFILES)
gcc $(TARGET_CFLAGS) $(COMMON_CFLAGS) -nostartfiles -shared -rdynamic $(GCORE_OFILES) -o gcore.so gcore.c

%.o: %.c $(INCDIR)/defs.h
gcc $(TARGET_CFLAGS) $(COMMON_CFLAGS) -c -o $@ $<

clean:
	find ./libgcore -regex ".+\(o\|so\)" -exec rm -f {} \;

/* gcore_coredump.c -- core analysis suite
*
* Copyright (C) 2010 FUJITSU LIMITED
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/

#include <defs.h>
#include <gcore_defs.h>
#include <elf.h>

static void fill_prstatus(struct elf_prstatus *prstatus, ulong task,
const struct thread_group_list *tglist);
static void fill_psinfo(struct elf_prpsinfo *psinfo, ulong task);
static void fill_auxv_note(struct memelfnote *note, ulong task);
static int fill_thread_group(struct thread_group_list **tglist);
static void fill_headers(Elf_Ehdr *elf, Elf_Shdr *shdr0, int phnum,
uint16_t e_machine, uint32_t e_flags,
uint8_t ei_osabi);
static void fill_thread_core_info(struct elf_thread_core_info *t,
const struct user_regset_view *view,
size_t *total,
struct thread_group_list *tglist);
static int fill_note_info(struct elf_note_info *info,
struct thread_group_list *tglist, Elf_Ehdr *elf,
Elf_Shdr *shdr0, int phnum);
static void fill_note(struct memelfnote *note, const char *name, int type,
unsigned int sz, void *data);

static int notesize(struct memelfnote *en);
static void alignfile(int fd, off_t *foffset);
static void write_elf_note_phdr(int fd, size_t size, off_t *offset);
static void writenote(struct memelfnote *men, int fd, off_t *foffset);
static void write_note_info(int fd, struct elf_note_info *info, off_t *foffset);
static size_t get_note_info_size(struct elf_note_info *info);
static ulong next_vma(ulong this_vma);

static inline int thread_group_leader(ulong task);

void gcore_coredump(void)
{
struct thread_group_list *tglist = NULL;
struct elf_note_info info;
Elf_Ehdr elf;
Elf_Shdr shdr0;
int map_count, phnum;
ulong vma, index, mmap;
off_t offset, foffset, dataoff;
char *mm_cache, *buffer = NULL;

gcore->flags |= GCF_UNDER_COREDUMP;

mm_cache = fill_mm_struct(task_mm(CURRENT_TASK(), TRUE));
if (!mm_cache)
error(FATAL, "The user memory space does not exist.\n");

mmap = ULONG(mm_cache + OFFSET(mm_struct_mmap));
map_count = INT(mm_cache + GCORE_OFFSET(mm_struct_map_count));

progressf("Restoring the thread group ...\n");
fill_thread_group(&tglist);
progressf("done.\n");

phnum = map_count;
phnum++; /* for note information */

progressf("Retrieving note information ...\n");
fill_note_info(&info, tglist, &elf, &shdr0, phnum);
progressf("done.\n");

progressf("Opening file %s ...\n", gcore->corename);
gcore->fd = open(gcore->corename, O_WRONLY|O_TRUNC|O_CREAT,
S_IRUSR|S_IWUSR);
if (gcore->fd < 0)
error(FATAL, "%s: open: %s\n", gcore->corename,
strerror(errno));
progressf("done.\n");

progressf("Writing ELF header ...\n");
if (write(gcore->fd, &elf, sizeof(elf)) != sizeof(elf))
error(FATAL, "%s: write: %s\n", gcore->corename,
strerror(errno));
progressf(" done.\n");

if (elf.e_shoff) {
progressf("Writing section header table ...\n");
if (write(gcore->fd, &shdr0, sizeof(shdr0)) != sizeof(shdr0))
error(FATAL, "%s: gcore: %s\n", gcore->corename,
strerror(errno));
progressf("done.\n");
}

offset = elf.e_ehsize +
(elf.e_phnum == PN_XNUM ? elf.e_shnum * elf.e_shentsize : 0) +
phnum * elf.e_phentsize;
foffset = offset;

progressf("Writing PT_NOTE program header ...\n");
write_elf_note_phdr(gcore->fd, get_note_info_size(&info), &offset);
progressf("done.\n");

dataoff = offset = roundup(offset, ELF_EXEC_PAGESIZE);

progressf("Writing PT_LOAD program headers ...\n");
FOR_EACH_VMA_OBJECT(vma, index, mmap) {
char *vma_cache;
ulong vm_start, vm_end, vm_flags;
Elf_Phdr phdr;

vma_cache = fill_vma_cache(vma);
vm_start = ULONG(vma_cache + OFFSET(vm_area_struct_vm_start));
vm_end = ULONG(vma_cache + OFFSET(vm_area_struct_vm_end));
vm_flags = ULONG(vma_cache + OFFSET(vm_area_struct_vm_flags));

phdr.p_type = PT_LOAD;
phdr.p_offset = offset;
phdr.p_vaddr = vm_start;
phdr.p_paddr = 0;
phdr.p_filesz = gcore_dumpfilter_vma_dump_size(vma);
phdr.p_memsz = vm_end - vm_start;
phdr.p_flags = vm_flags & VM_READ ? PF_R : 0;
if (vm_flags & VM_WRITE)
phdr.p_flags |= PF_W;
if (vm_flags & VM_EXEC)
phdr.p_flags |= PF_X;
phdr.p_align = ELF_EXEC_PAGESIZE;

offset += phdr.p_filesz;

if (write(gcore->fd, &phdr, sizeof(phdr)) != sizeof(phdr))
error(FATAL, "%s: write, %s\n", gcore->corename,
strerror(errno));
}
progressf("done.\n");

progressf("Writing PT_NOTE segment ...\n");
write_note_info(gcore->fd, &info, &foffset);
progressf("done.\n");

buffer = GETBUF(PAGE_SIZE);
BZERO(buffer, PAGE_SIZE);

{
size_t len;

len = dataoff - foffset;
if ((size_t)write(gcore->fd, buffer, len) != len)
error(FATAL, "%s: write: %s\n", gcore->corename,
strerror(errno));
}

progressf("Writing PT_LOAD segment ...\n");
FOR_EACH_VMA_OBJECT(vma, index, mmap) {
ulong addr, end, vm_start;

vm_start = ULONG(fill_vma_cache(vma) +
OFFSET(vm_area_struct_vm_start));

end = vm_start + gcore_dumpfilter_vma_dump_size(vma);

progressf("PT_LOAD[%lu]: %lx - %lx\n", index, vm_start, end);

for (addr = vm_start; addr < end; addr += PAGE_SIZE) {
physaddr_t paddr;

if (uvtop(CURRENT_CONTEXT(), addr, &paddr, FALSE)) {
readmem(paddr, PHYSADDR, buffer, PAGE_SIZE,
"readmem vma list",
gcore_verbose_error_handle());
} else {
pagefaultf("page fault at %lx\n", addr);
BZERO(buffer, PAGE_SIZE);
}

if (write(gcore->fd, buffer, PAGE_SIZE) != PAGE_SIZE)
error(FATAL, "%s: write: %s\n", gcore->corename,
strerror(errno));

}
}
progressf("done.\n");

gcore->flags |= GCF_SUCCESS;

}

static inline int
thread_group_leader(ulong task)
{
ulong group_leader;

readmem(task + GCORE_OFFSET(task_struct_group_leader), KVADDR,
&group_leader, sizeof(group_leader),
"thread_group_leader: group_leader",
gcore_verbose_error_handle());

return task == group_leader;
}

static int
fill_thread_group(struct thread_group_list **tglist)
{
ulong i;
struct task_context *tc;
struct thread_group_list *l;
const uint tgid = task_tgid(CURRENT_TASK());
const ulong lead_pid = CURRENT_PID();

tc = FIRST_CONTEXT();
l = NULL;
for (i = 0; i < RUNNING_TASKS(); i++, tc++) {
if (task_tgid(tc->task) == tgid) {
struct thread_group_list *new;

new = (struct thread_group_list *)
GETBUF(sizeof(struct thread_group_list));
new->task = tc->task;
if (tc->pid == lead_pid || !l) {
new->next = l;
l = new;
} else if (l) {
new->next = l->next;
l->next = new;
}
}
}
*tglist = l;

return 1;
}

static int
task_nice(ulong task)
{
int static_prio;

readmem(task + GCORE_OFFSET(task_struct_static_prio), KVADDR,
&static_prio, sizeof(static_prio), "task_nice: static_prio",
gcore_verbose_error_handle());

return PRIO_TO_NICE(static_prio);
}

static void
fill_psinfo(struct elf_prpsinfo *psinfo, ulong task)
{
ulong arg_start, arg_end, parent;
physaddr_t paddr;
long state, uid, gid;
unsigned int i, len;
char *mm_cache;

/* first copy the parameters from user space */
BZERO(psinfo, sizeof(struct elf_prpsinfo));

mm_cache = fill_mm_struct(task_mm(task, FALSE));

arg_start = ULONG(mm_cache + GCORE_OFFSET(mm_struct_arg_start));
arg_end = ULONG(mm_cache + GCORE_OFFSET(mm_struct_arg_end));

len = arg_end - arg_start;
if (len >= ELF_PRARGSZ)
len = ELF_PRARGSZ-1;
if (uvtop(CURRENT_CONTEXT(), arg_start, &paddr, FALSE)) {
readmem(paddr, PHYSADDR, &psinfo->pr_psargs, len,
"fill_psinfo: pr_psargs", gcore_verbose_error_handle());
} else {
pagefaultf("page fault at %lx\n", arg_start);
}
for(i = 0; i < len; i++)
if (psinfo->pr_psargs[i] == 0)
psinfo->pr_psargs[i] = ' ';
psinfo->pr_psargs[len] = 0;

readmem(task + GCORE_OFFSET(task_struct_real_parent), KVADDR,
&parent, sizeof(parent), "fill_psinfo: real_parent",
gcore_verbose_error_handle());

psinfo->pr_ppid = ggt->task_pid(parent);
psinfo->pr_pid = ggt->task_pid(task);
psinfo->pr_pgrp = ggt->task_pgrp(task);
psinfo->pr_sid = ggt->task_session(task);

readmem(task + OFFSET(task_struct_state), KVADDR, &state, sizeof(state),
"fill_psinfo: state", gcore_verbose_error_handle());

i = state ? ffz(~state) + 1 : 0;
psinfo->pr_state = i;
psinfo->pr_sname = (i > 5) ? '.' : "RSDTZW"[i];
psinfo->pr_zomb = psinfo->pr_sname == 'Z';

psinfo->pr_nice = task_nice(task);

readmem(task + OFFSET(task_struct_flags), KVADDR, &psinfo->pr_flag,
sizeof(psinfo->pr_flag), "fill_psinfo: flags",
gcore_verbose_error_handle());

uid = ggt->task_uid(task);
gid = ggt->task_gid(task);

SET_UID(psinfo->pr_uid, (uid_t)uid);
SET_GID(psinfo->pr_gid, (gid_t)gid);

readmem(task + OFFSET(task_struct_comm), KVADDR, &psinfo->pr_fname,
TASK_COMM_LEN, "fill_psinfo: comm",
gcore_verbose_error_handle());

}

static void
fill_headers(Elf_Ehdr *elf, Elf_Shdr *shdr0, int phnum, uint16_t e_machine,
uint32_t e_flags, uint8_t ei_osabi)
{
BZERO(elf, sizeof(Elf_Ehdr));
BCOPY(ELFMAG, elf->e_ident, SELFMAG);
elf->e_ident[EI_CLASS] = ELF_CLASS;
elf->e_ident[EI_DATA] = ELF_DATA;
elf->e_ident[EI_VERSION] = EV_CURRENT;
elf->e_ident[EI_OSABI] = ei_osabi;
elf->e_ehsize = sizeof(Elf_Ehdr);
elf->e_phentsize = sizeof(Elf_Phdr);
elf->e_phnum = phnum >= PN_XNUM ? PN_XNUM : phnum;
if (elf->e_phnum == PN_XNUM) {
elf->e_shoff = elf->e_ehsize;
elf->e_shentsize = sizeof(Elf_Shdr);
elf->e_shnum = 1;
elf->e_shstrndx = SHN_UNDEF;
}
elf->e_type = ET_CORE;
elf->e_machine = e_machine;
elf->e_version = EV_CURRENT;
elf->e_phoff = sizeof(Elf_Ehdr) + elf->e_shentsize * elf->e_shnum;
elf->e_flags = e_flags;

if (elf->e_phnum == PN_XNUM) {
BZERO(shdr0, sizeof(Elf_Shdr));
shdr0->sh_type = SHT_NULL;
shdr0->sh_size = elf->e_shnum;
shdr0->sh_link = elf->e_shstrndx;
shdr0->sh_info = phnum;
}

}
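fill_headers() follows the ELF extended-numbering convention: when the program header count reaches PN_XNUM (0xffff), e_phnum is set to that escape value and the real count is carried in the sh_info field of a dummy section header 0. A minimal round-trip sketch of that convention (the helper names are illustrative, not part of the module):

```c
#include <assert.h>
#include <stdint.h>

#define PN_XNUM 0xffff  /* e_phnum value meaning "see shdr0.sh_info" */

/* Encode a real phdr count into the (e_phnum, sh_info) pair. */
static void encode_phnum(int phnum, uint16_t *e_phnum, uint32_t *sh_info)
{
	if (phnum >= PN_XNUM) {
		*e_phnum = PN_XNUM;       /* overflow marker */
		*sh_info = (uint32_t)phnum; /* real count lives in section 0 */
	} else {
		*e_phnum = (uint16_t)phnum;
		*sh_info = 0;
	}
}

/* Decode: readers must check for the PN_XNUM escape value. */
static int decode_phnum(uint16_t e_phnum, uint32_t sh_info)
{
	return e_phnum == PN_XNUM ? (int)sh_info : e_phnum;
}
```

This is why fill_headers() only emits a section header table at all in the PN_XNUM case.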

static void
fill_thread_core_info(struct elf_thread_core_info *t,
const struct user_regset_view *view, size_t *total,
struct thread_group_list *tglist)
{
unsigned int i;

/* NT_PRSTATUS is the one special case, because the regset data
 * goes into the pr_reg field inside the note contents, rather
 * than being the whole note contents. We fill the rest in here.
 * We assume that regset 0 is NT_PRSTATUS.
 */
fill_prstatus(&t->prstatus, t->task, tglist);
view->regsets[0].get(task_to_context(t->task), &view->regsets[0],
sizeof(t->prstatus.pr_reg), &t->prstatus.pr_reg);

fill_note(&t->notes[0], "CORE", NT_PRSTATUS,
sizeof(t->prstatus), &t->prstatus);
*total += notesize(&t->notes[0]);

if (view->regsets[0].writeback)
view->regsets[0].writeback(task_to_context(t->task),
&view->regsets[0], 1);

for (i = 1; i < view->n; ++i) {
const struct user_regset *regset = &view->regsets[i];
void *data;

if (regset->writeback)
regset->writeback(task_to_context(t->task), regset, 1);
if (!regset->core_note_type)
continue;
if (regset->active &&
!regset->active(task_to_context(t->task), regset))
continue;
data = (void *)GETBUF(regset->size);
if (!regset->get(task_to_context(t->task), regset, regset->size,
data)) {
FREEBUF(data);
continue;
}
if (regset->callback)
regset->callback(t, regset);

fill_note(&t->notes[i], regset->name, regset->core_note_type,
regset->size, data);
*total += notesize(&t->notes[i]);
}

}

static int
fill_note_info(struct elf_note_info *info, struct thread_group_list *tglist,
Elf_Ehdr *elf, Elf_Shdr *shdr0, int phnum)
{
const struct user_regset_view *view = task_user_regset_view();
struct thread_group_list *l;
struct elf_thread_core_info *t;
struct elf_prpsinfo *psinfo = NULL;
ulong dump_task;
unsigned int i;

info->size = 0;
info->thread = NULL;

psinfo = (struct elf_prpsinfo *)GETBUF(sizeof(struct elf_prpsinfo));
fill_note(&info->psinfo, "CORE", NT_PRPSINFO,
sizeof(struct elf_prpsinfo), psinfo);

info->thread_notes = 0;
for (i = 0; i < view->n; i++)
if (view->regsets[i].core_note_type != 0)
++info->thread_notes;

/* Sanity check. We rely on regset 0 being in NT_PRSTATUS,
* since it is our one special case.
*/
if (info->thread_notes == 0 ||
view->regsets[0].core_note_type != NT_PRSTATUS)
error(FATAL, "regset 0 is _not_ NT_PRSTATUS\n");

fill_headers(elf, shdr0, phnum, view->e_machine, view->e_flags,
view->ei_osabi);

/* head task is always a dump target */
dump_task = tglist->task;

for (l = tglist; l; l = l->next) {
struct elf_thread_core_info *new;
size_t entry_size;

entry_size = offsetof(struct elf_thread_core_info,
notes[info->thread_notes]);
new = (struct elf_thread_core_info *)GETBUF(entry_size);
BZERO(new, entry_size);
new->task = l->task;
if (!info->thread || l->task == dump_task) {
new->next = info->thread;
info->thread = new;
} else {
/* keep dump_task in the head position */
new->next = info->thread->next;
info->thread->next = new;
}
}

for (t = info->thread; t; t = t->next)
fill_thread_core_info(t, view, &info->size, tglist);

/*
* Fill in the two process-wide notes.
*/
fill_psinfo(psinfo, dump_task);
info->size += notesize(&info->psinfo);

fill_auxv_note(&info->auxv, dump_task);
info->size += notesize(&info->auxv);

return 0;
}

static int
notesize(struct memelfnote *en)
{
int sz;

sz = sizeof(Elf_Nhdr);
sz += roundup(strlen(en->name) + 1, 4);
sz += roundup(en->datasz, 4);

return sz;
}

static void
fill_note(struct memelfnote *note, const char *name, int type, unsigned int sz,
void *data)
{
note->name = name;
note->type = type;
note->datasz = sz;
note->data = data;
return;
}

static void
alignfile(int fd, off_t *foffset)
{
static const char buffer[4] = {};
const size_t len = roundup(*foffset, 4) - *foffset;

if ((size_t)write(fd, buffer, len) != len)
error(FATAL, "%s: write %s\n", gcore->corename,
strerror(errno));
*foffset += (off_t)len;
}

static void
writenote(struct memelfnote *men, int fd, off_t *foffset)
{
const Elf_Nhdr en = {
.n_namesz = strlen(men->name) + 1,
.n_descsz = men->datasz,
.n_type = men->type,
};

if (write(fd, &en, sizeof(en)) != sizeof(en))
error(FATAL, "%s: write %s\n", gcore->corename,
strerror(errno));
*foffset += sizeof(en);

if (write(fd, men->name, en.n_namesz) != en.n_namesz)
error(FATAL, "%s: write %s\n", gcore->corename,
strerror(errno));
*foffset += en.n_namesz;

alignfile(fd, foffset);

if (write(fd, men->data, men->datasz) != men->datasz)
error(FATAL, "%s: write %s\n", gcore->corename,
strerror(errno));
*foffset += men->datasz;

alignfile(fd, foffset);

}
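notesize(), alignfile() and writenote() together implement the on-disk ELF note layout: a fixed 12-byte Elf_Nhdr, the NUL-terminated name, then the descriptor, with both variable-length parts padded to 4-byte boundaries. A small self-contained sketch of the same size computation (helper names are illustrative, not the module's):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Round x up to a multiple of 4, as alignfile() does on the file offset. */
static size_t roundup4(size_t x)
{
	return (x + 3) & ~(size_t)3;
}

/* On-disk size of one ELF note: header + padded name + padded desc.
 * The 12-byte header holds n_namesz, n_descsz and n_type. */
static size_t note_disk_size(const char *name, size_t datasz)
{
	return 12 + roundup4(strlen(name) + 1) + roundup4(datasz);
}
```

For example, a "CORE" note ("CORE" plus NUL is 5 bytes, padded to 8) with a 336-byte descriptor occupies 12 + 8 + 336 = 356 bytes, matching what notesize() adds to the accumulated note size.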

static void
write_note_info(int fd, struct elf_note_info *info, off_t *foffset)
{
int first = 1;
struct elf_thread_core_info *t = info->thread;

do {
int i;

writenote(&t->notes[0], fd, foffset);

if (first) {
writenote(&info->psinfo, fd, foffset);
writenote(&info->auxv, fd, foffset);
}

for (i = 1; i < info->thread_notes; ++i)
if (t->notes[i].data)
writenote(&t->notes[i], fd, foffset);

first = 0;
t = t->next;
} while (t);

}

static size_t
get_note_info_size(struct elf_note_info *info)
{
return info->size;
}

static ulong next_vma(ulong this_vma)
{
return ULONG(fill_vma_cache(this_vma) + OFFSET(vm_area_struct_vm_next));
}

static void
write_elf_note_phdr(int fd, size_t size, off_t *offset)
{
Elf_Phdr phdr;

BZERO(&phdr, sizeof(phdr));

phdr.p_type = PT_NOTE;
phdr.p_offset = *offset;
phdr.p_filesz = size;

*offset += size;

if (write(fd, &phdr, sizeof(phdr)) != sizeof(phdr))
error(FATAL, "%s: write: %s\n", gcore->corename,
strerror(errno));

}

static void
fill_prstatus(struct elf_prstatus *prstatus, ulong task,
const struct thread_group_list *tglist)
{
ulong pending_signal_sig0, blocked_sig0, real_parent, group_leader,
signal, cutime, cstime;

/* The type of (sig[0]) is unsigned long. */
readmem(task + OFFSET(task_struct_pending) + OFFSET(sigpending_signal),
KVADDR, &pending_signal_sig0, sizeof(unsigned long),
"fill_prstatus: sigpending_signal_sig",
gcore_verbose_error_handle());

readmem(task + OFFSET(task_struct_blocked), KVADDR, &blocked_sig0,
sizeof(unsigned long), "fill_prstatus: blocked_sig0",
gcore_verbose_error_handle());

readmem(task + GCORE_OFFSET(task_struct_real_parent), KVADDR, &real_parent,
sizeof(real_parent), "fill_prstatus: real_parent",
gcore_verbose_error_handle());

readmem(task + GCORE_OFFSET(task_struct_group_leader), KVADDR,
&group_leader, sizeof(group_leader),
"fill_prstatus: group_leader", gcore_verbose_error_handle());

prstatus->pr_info.si_signo = prstatus->pr_cursig = 0;
prstatus->pr_sigpend = pending_signal_sig0;
prstatus->pr_sighold = blocked_sig0;
prstatus->pr_ppid = ggt->task_pid(real_parent);
prstatus->pr_pid = ggt->task_pid(task);
prstatus->pr_pgrp = ggt->task_pgrp(task);
prstatus->pr_sid = ggt->task_session(task);
if (thread_group_leader(task)) {
struct task_cputime cputime;

/*
* This is the record for the group leader. It shows the
* group-wide total, not its individual thread total.
*/
ggt->thread_group_cputime(task, tglist, &cputime);
cputime_to_timeval(cputime.utime, &prstatus->pr_utime);
cputime_to_timeval(cputime.stime, &prstatus->pr_stime);
} else {
cputime_t utime, stime;

readmem(task + OFFSET(task_struct_utime), KVADDR, &utime,
sizeof(utime), "task_struct utime", gcore_verbose_error_handle());

readmem(task + OFFSET(task_struct_stime), KVADDR, &stime,
sizeof(stime), "task_struct stime", gcore_verbose_error_handle());

cputime_to_timeval(utime, &prstatus->pr_utime);
cputime_to_timeval(stime, &prstatus->pr_stime);
}

readmem(task + OFFSET(task_struct_signal), KVADDR, &signal,
sizeof(signal), "task_struct signal", gcore_verbose_error_handle());

readmem(signal + GCORE_OFFSET(signal_struct_cutime), KVADDR,
&cutime, sizeof(cutime), "signal_struct cutime",
gcore_verbose_error_handle());

readmem(signal + GCORE_OFFSET(signal_struct_cstime), KVADDR,
&cstime, sizeof(cstime), "signal_struct cstime",
gcore_verbose_error_handle());

cputime_to_timeval(cutime, &prstatus->pr_cutime);
cputime_to_timeval(cstime, &prstatus->pr_cstime);

}

static void
fill_auxv_note(struct memelfnote *note, ulong task)
{
ulong *auxv;
int i;

auxv = (ulong *)GETBUF(GCORE_SIZE(mm_struct_saved_auxv));

readmem(task_mm(task, FALSE) +
GCORE_OFFSET(mm_struct_saved_auxv), KVADDR, auxv,
GCORE_SIZE(mm_struct_saved_auxv), "fill_auxv_note",
gcore_verbose_error_handle());

i = 0;
do
i += 2;
while (auxv[i - 2] != AT_NULL); /* include the AT_NULL terminator */

fill_note(note, "CORE", NT_AUXV, i * sizeof(ulong), auxv);

}
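The saved_auxv scan relies on the auxiliary vector being a flat array of (type, value) pairs terminated by an AT_NULL pair, which the kernel's core dumper includes in the NT_AUXV note. A standalone sketch of the same scan, using plain types as stand-ins for the crash-internal ones:

```c
#include <assert.h>
#include <stddef.h>

#define AT_NULL 0  /* end-of-vector marker, as in <elf.h> */

/* Number of (type, value) slots to dump, terminator included,
 * mirroring the do/while scan in fill_auxv_note(). */
static size_t auxv_dump_len(const unsigned long *auxv)
{
	size_t i = 0;

	do
		i += 2;
	while (auxv[i - 2] != AT_NULL);

	return i;
}
```

Testing auxv[i - 2] rather than auxv[i] is what keeps the terminating AT_NULL pair inside the dumped region.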
/* gcore_coredump_table.c -- core analysis suite
*
* Copyright (C) 2010 FUJITSU LIMITED
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/

#include <defs.h>
#include <gcore_defs.h>

static unsigned int get_inode_i_nlink_v0(ulong file);
static unsigned int get_inode_i_nlink_v19(ulong file);
static pid_t pid_nr_ns(ulong pid, ulong ns);
static int pid_alive(ulong task);
static int __task_pid_nr_ns(ulong task, enum pid_type type);
static inline pid_t task_pid(ulong task);
static inline pid_t process_group(ulong task);
static inline pid_t task_session(ulong task);
static inline pid_t task_pid_vnr(ulong task);
static inline pid_t task_pgrp_vnr(ulong task);
static inline pid_t task_session_vnr(ulong task);
static void
thread_group_cputime_v0(ulong task, const struct thread_group_list *threads,
struct task_cputime *cputime);
static void
thread_group_cputime_v22(ulong task, const struct thread_group_list *threads,
struct task_cputime *cputime);
static inline __kernel_uid_t task_uid_v0(ulong task);
static inline __kernel_uid_t task_uid_v28(ulong task);
static inline __kernel_gid_t task_gid_v0(ulong task);
static inline __kernel_gid_t task_gid_v28(ulong task);

void gcore_coredump_table_init(void)
{
/*
 * struct path was introduced in v2.6.19, when the f_dentry
 * member of struct file was replaced by the f_path member.
 *
 * See vfs_init() for why this condition is chosen.
 *
 * See commit 0f7fc9e4d03987fe29f6dd4aa67e4c56eb7ecb05.
 */
if (VALID_MEMBER(file_f_path))
ggt->get_inode_i_nlink = get_inode_i_nlink_v19;
else
ggt->get_inode_i_nlink = get_inode_i_nlink_v0;

/*
 * task_pid_vnr() and the related helpers were introduced in
 * v2.6.23, while pid_namespace itself appeared earlier, in
 * v2.6.19.
 *
 * We key on the former commit because the pid facility was
 * not implemented fully enough until those patches were
 * merged.
 *
 * We probe the symbol ``pid_nr_ns' because it is the one
 * relevant function that is not defined as static inline.
 *
 * See commit 7af5729474b5b8ad385adadab78d6e723e7655a3.
 */
if (symbol_exists("pid_nr_ns")) {
ggt->task_pid = task_pid_vnr;
ggt->task_pgrp = task_pgrp_vnr;
ggt->task_session = task_session_vnr;
} else {
ggt->task_pid = task_pid;
ggt->task_pgrp = process_group;
ggt->task_session = task_session;
}

/*
 * The way cputime is tracked changed when CFS was introduced
 * in v2.6.23, which can be detected by checking whether the
 * se member of struct task_struct exists.
 *
 * See commit 20b8a59f2461e1be911dce2cfafefab9d22e4eee.
 */
if (GCORE_VALID_MEMBER(task_struct_se))
ggt->thread_group_cputime = thread_group_cputime_v22;
else
ggt->thread_group_cputime = thread_group_cputime_v0;

/*
 * The credentials feature was introduced in v2.6.28, when the
 * uid and gid members were moved into the newly introduced
 * cred member of struct task_struct.
 *
 * See commit b6dff3ec5e116e3af6f537d4caedcad6b9e5082a.
 */
if (GCORE_VALID_MEMBER(task_struct_cred)) {
ggt->task_uid = task_uid_v28;
ggt->task_gid = task_gid_v28;
} else {
ggt->task_uid = task_uid_v0;
ggt->task_gid = task_gid_v0;
}

}
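gcore_coredump_table_init() is an instance of a runtime dispatch table: each kernel-version difference is probed once (a symbol or a structure member), and the matching implementation is installed behind a stable function pointer that the rest of the module calls unconditionally. A minimal sketch of the pattern, with purely illustrative names and stand-in implementations:

```c
#include <assert.h>

/* Version-dispatch pattern used by gcore_coredump_table_init():
 * probe a feature of the target kernel once, then install the
 * matching implementation behind a stable function pointer.
 * All names here are illustrative, not the module's own. */
struct ops {
	int (*task_uid)(int task);
};

static int task_uid_old(int task) { return task + 1; } /* pre-cred stand-in  */
static int task_uid_new(int task) { return task + 2; } /* cred-based stand-in */

/* has_cred_member plays the role of GCORE_VALID_MEMBER(task_struct_cred). */
static void ops_init(struct ops *ops, int has_cred_member)
{
	ops->task_uid = has_cred_member ? task_uid_new : task_uid_old;
}
```

The probe runs once at module load, so callers never pay the version check on the hot path and never branch on kernel version themselves.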

static unsigned int get_inode_i_nlink_v0(ulong file)
{
ulong d_entry, d_inode;
unsigned int i_nlink;

readmem(file + OFFSET(file_f_dentry), KVADDR, &d_entry, sizeof(d_entry),
"get_inode_i_nlink_v0: d_entry", gcore_verbose_error_handle());

readmem(d_entry + OFFSET(dentry_d_inode), KVADDR, &d_inode,
sizeof(d_inode), "get_inode_i_nlink_v0: d_inode",
gcore_verbose_error_handle());

readmem(d_inode + GCORE_OFFSET(inode_i_nlink), KVADDR, &i_nlink,
sizeof(i_nlink), "get_inode_i_nlink_v0: i_nlink",
gcore_verbose_error_handle());

return i_nlink;
}

static unsigned int get_inode_i_nlink_v19(ulong file)
{
ulong d_entry, d_inode;
unsigned int i_nlink;

readmem(file + OFFSET(file_f_path) + OFFSET(path_dentry), KVADDR,
&d_entry, sizeof(d_entry), "get_inode_i_nlink_v19: d_entry",
gcore_verbose_error_handle());

readmem(d_entry + OFFSET(dentry_d_inode), KVADDR, &d_inode, sizeof(d_inode),
"get_inode_i_nlink_v19: d_inode", gcore_verbose_error_handle());

readmem(d_inode + GCORE_OFFSET(inode_i_nlink), KVADDR, &i_nlink,
sizeof(i_nlink), "get_inode_i_nlink_v19: i_nlink",
gcore_verbose_error_handle());

return i_nlink;
}

static inline pid_t
task_pid(ulong task)
{
return task_to_context(task)->pid;
}

static inline pid_t
process_group(ulong task)
{
ulong signal;
pid_t pgrp;

readmem(task + OFFSET(task_struct_signal), KVADDR, &signal,
sizeof(signal), "process_group: signal", gcore_verbose_error_handle());

readmem(signal + GCORE_OFFSET(signal_struct_pgrp), KVADDR, &pgrp,
sizeof(pgrp), "process_group: pgrp", gcore_verbose_error_handle());

return pgrp;
}

static inline pid_t
task_session(ulong task)
{
ulong signal;
pid_t session;

readmem(task + OFFSET(task_struct_signal), KVADDR, &signal,
sizeof(signal), "task_session: signal", gcore_verbose_error_handle());

readmem(signal + GCORE_OFFSET(signal_struct_session), KVADDR,
&session, sizeof(session), "task_session: session",
gcore_verbose_error_handle());

return session;
}

static pid_t
pid_nr_ns(ulong pid, ulong ns)
{
ulong upid;
unsigned int ns_level, pid_level;
pid_t nr = 0;

readmem(ns + GCORE_OFFSET(pid_namespace_level), KVADDR, &ns_level,
sizeof(ns_level), "pid_nr_ns: ns_level", gcore_verbose_error_handle());

readmem(pid + GCORE_OFFSET(pid_level), KVADDR, &pid_level,
sizeof(pid_level), "pid_nr_ns: pid_level", gcore_verbose_error_handle());

if (pid && ns_level <= pid_level) {
ulong upid_ns;

upid = pid + OFFSET(pid_numbers) + SIZE(upid) * ns_level;

readmem(upid + OFFSET(upid_ns), KVADDR, &upid_ns,
sizeof(upid_ns), "pid_nr_ns: upid_ns",
gcore_verbose_error_handle());

if (upid_ns == ns)
readmem(upid + OFFSET(upid_nr), KVADDR, &nr,
sizeof(ulong), "pid_nr_ns: upid_nr",
gcore_verbose_error_handle());
}

return nr;
}
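pid_nr_ns() mirrors the kernel's namespace-aware pid lookup: struct pid carries one (nr, ns) pair per namespace level, and a pid is translatable in a given namespace only if that namespace's level does not exceed the pid's own. A simplified stand-alone model of that lookup, using stand-in types rather than the kernel's layouts:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins: struct pid holds one (nr, ns) entry per
 * namespace level, deepest level last. Not the kernel's types. */
struct upid { int nr; int ns; };
struct pid  { unsigned int level; struct upid numbers[4]; };

/* Return the pid number as seen from namespace `ns` at `ns_level`,
 * or 0 if the pid is not visible there. */
static int pid_nr_in_ns(const struct pid *pid, int ns, unsigned int ns_level)
{
	if (pid && ns_level <= pid->level) {
		const struct upid *u = &pid->numbers[ns_level];

		if (u->ns == ns)
			return u->nr;
	}
	return 0;
}
```

The level check is what makes a process in a nested pid namespace invisible (nr == 0) to deeper namespaces, while remaining addressable from every ancestor namespace under a different number.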

static int
__task_pid_nr_ns(ulong task, enum pid_type type)
{
ulong nsproxy, ns;
int nr = 0;

readmem(task + OFFSET(task_struct_nsproxy), KVADDR, &nsproxy,
sizeof(nsproxy), "__task_pid_nr_ns: nsproxy",
gcore_verbose_error_handle());

readmem(nsproxy + GCORE_OFFSET(nsproxy_pid_ns), KVADDR, &ns,
sizeof(ns), "__task_pid_nr_ns: ns", gcore_verbose_error_handle());

if (pid_alive(task)) {
ulong pids_type_pid;

if (type != PIDTYPE_PID)
readmem(task + MEMBER_OFFSET("task_struct",
"group_leader"),
KVADDR, &task, sizeof(ulong),
"__task_pid_nr_ns: group_leader",
gcore_verbose_error_handle());

readmem(task + OFFSET(task_struct_pids) + type * SIZE(pid_link)
+ OFFSET(pid_link_pid), KVADDR, &pids_type_pid,
sizeof(pids_type_pid),
"__task_pid_nr_ns: pids_type_pid", gcore_verbose_error_handle());

nr = pid_nr_ns(pids_type_pid, ns);
}

return nr;
}

static inline pid_t
task_pid_vnr(ulong task)
{
return __task_pid_nr_ns(task, PIDTYPE_PID);
}

static inline pid_t
task_pgrp_vnr(ulong task)
{
return __task_pid_nr_ns(task, PIDTYPE_PGID);
}

static inline pid_t
task_session_vnr(ulong task)
{
return __task_pid_nr_ns(task, PIDTYPE_SID);
}

static void
thread_group_cputime_v0(ulong task, const struct thread_group_list *threads,
struct task_cputime *cputime)
{
ulong signal;
ulong utime, signal_utime, stime, signal_stime;

readmem(task + OFFSET(task_struct_signal), KVADDR, &signal,
sizeof(signal), "thread_group_cputime_v0: signal",
gcore_verbose_error_handle());

readmem(task + OFFSET(task_struct_utime), KVADDR, &utime,
sizeof(utime), "thread_group_cputime_v0: utime",
gcore_verbose_error_handle());

readmem(signal + GCORE_OFFSET(signal_struct_utime), KVADDR,
&signal_utime, sizeof(signal_utime),
"thread_group_cputime_v0: signal_utime",
gcore_verbose_error_handle());

readmem(task + OFFSET(task_struct_stime), KVADDR, &stime,
sizeof(stime), "thread_group_cputime_v0: stime",
gcore_verbose_error_handle());

readmem(signal + GCORE_OFFSET(signal_struct_stime), KVADDR,
&signal_stime, sizeof(signal_stime),
"thread_group_cputime_v0: signal_stime",
gcore_verbose_error_handle());

cputime->utime = utime + signal_utime;
cputime->stime = stime + signal_stime;
cputime->sum_exec_runtime = 0;

}

static void
thread_group_cputime_v22(ulong task, const struct thread_group_list *threads,
struct task_cputime *times)
{
const struct thread_group_list *t;
ulong sighand, signal, signal_utime, signal_stime;
uint64_t sum_sched_runtime;

*times = INIT_CPUTIME;

readmem(task + OFFSET(task_struct_sighand), KVADDR, &sighand,
sizeof(sighand), "thread_group_cputime_v22: sighand",
gcore_verbose_error_handle());

if (!sighand)
goto out;

readmem(task + OFFSET(task_struct_signal), KVADDR, &signal,
sizeof(signal), "thread_group_cputime_v22: signal",
gcore_verbose_error_handle());

for (t = threads; t; t = t->next) {
ulong utime, stime;
uint64_t sum_exec_runtime;

readmem(t->task + OFFSET(task_struct_utime), KVADDR, &utime,
sizeof(utime), "thread_group_cputime_v22: utime",
gcore_verbose_error_handle());

readmem(t->task + OFFSET(task_struct_stime), KVADDR, &stime,
sizeof(stime), "thread_group_cputime_v22: stime",
gcore_verbose_error_handle());

readmem(t->task + GCORE_OFFSET(task_struct_se) +
GCORE_OFFSET(sched_entity_sum_exec_runtime), KVADDR,
&sum_exec_runtime, sizeof(sum_exec_runtime),
"thread_group_cputime_v22: sum_exec_runtime",
gcore_verbose_error_handle());

times->utime = cputime_add(times->utime, utime);
times->stime = cputime_add(times->stime, stime);
times->sum_exec_runtime += sum_exec_runtime;
}

readmem(signal + GCORE_OFFSET(signal_struct_utime), KVADDR,
&signal_utime, sizeof(signal_utime),
"thread_group_cputime_v22: signal_utime", gcore_verbose_error_handle());

readmem(signal + GCORE_OFFSET(signal_struct_stime), KVADDR,
&signal_stime, sizeof(signal_stime),
"thread_group_cputime_v22: signal_stime", gcore_verbose_error_handle());

readmem(signal + GCORE_OFFSET(signal_struct_sum_sched_runtime),
KVADDR, &sum_sched_runtime, sizeof(sum_sched_runtime),
"thread_group_cputime_v22: sum_sched_runtime",
gcore_verbose_error_handle());

times->utime = cputime_add(times->utime, signal_utime);
times->stime = cputime_add(times->stime, signal_stime);
times->sum_exec_runtime += sum_sched_runtime;

out:
return;
}

static inline __kernel_uid_t
task_uid_v0(ulong task)
{
__kernel_uid_t uid;

readmem(task + GCORE_OFFSET(task_struct_uid), KVADDR, &uid,
sizeof(uid), "task_uid_v0: uid", gcore_verbose_error_handle());

return uid;
}

static inline __kernel_uid_t
task_uid_v28(ulong task)
{
ulong cred;
__kernel_uid_t uid;

readmem(task + GCORE_OFFSET(task_struct_real_cred), KVADDR, &cred,
sizeof(cred), "task_uid_v28: real_cred", gcore_verbose_error_handle());

readmem(cred + GCORE_OFFSET(cred_uid), KVADDR, &uid, sizeof(uid),
"task_uid_v28: uid", gcore_verbose_error_handle());

return uid;
}

static inline __kernel_gid_t
task_gid_v0(ulong task)
{
__kernel_gid_t gid;

readmem(task + GCORE_OFFSET(task_struct_gid), KVADDR, &gid,
sizeof(gid), "task_gid_v0: gid", gcore_verbose_error_handle());

return gid;
}

static inline __kernel_gid_t
task_gid_v28(ulong task)
{
ulong cred;
__kernel_gid_t gid;

readmem(task + GCORE_OFFSET(task_struct_real_cred), KVADDR, &cred,
sizeof(cred), "task_gid_v28: real_cred", gcore_verbose_error_handle());

readmem(cred + GCORE_OFFSET(cred_gid), KVADDR, &gid, sizeof(gid),
"task_gid_v28: gid", gcore_verbose_error_handle());

return gid;
}

static int
pid_alive(ulong task)
{
pid_t pid;

readmem(task + OFFSET(task_struct_pids) + PIDTYPE_PID * SIZE(pid_link)
+ OFFSET(pid_link_pid), KVADDR, &pid, sizeof(pid), "pid_alive",
gcore_verbose_error_handle());

return !!pid;
}

#ifdef GCORE_TEST

char *gcore_coredump_table_test(void)
{
int test_i_nlink = 0, test_pid = 0, test_pgrp = 0, test_session = 0,
test_cputime = 0, test_uid = 0, test_gid = 0;

if (gcore_is_rhel4()) {
test_i_nlink = ggt->get_inode_i_nlink == get_inode_i_nlink_v0;
test_pid = ggt->task_pid == task_pid;
test_pgrp = ggt->task_pgrp == process_group;
test_session = ggt->task_session == task_session;
test_cputime = ggt->thread_group_cputime == thread_group_cputime_v0;
test_uid = ggt->task_uid == task_uid_v0;
test_gid = ggt->task_gid == task_gid_v0;
} else if (gcore_is_rhel5()) {
test_i_nlink = ggt->get_inode_i_nlink == get_inode_i_nlink_v0;
test_pid = ggt->task_pid == task_pid;
test_pgrp = ggt->task_pgrp == process_group;
test_session = ggt->task_session == task_session;
test_cputime = ggt->thread_group_cputime == thread_group_cputime_v0;
test_uid = ggt->task_uid == task_uid_v0;
test_gid = ggt->task_gid == task_gid_v0;
} else if (gcore_is_rhel6()) {
test_i_nlink = ggt->get_inode_i_nlink == get_inode_i_nlink_v19;
test_pid = ggt->task_pid == task_pid_vnr;
test_pgrp = ggt->task_pgrp == task_pgrp_vnr;
test_session = ggt->task_session == task_session_vnr;
test_cputime = ggt->thread_group_cputime == thread_group_cputime_v22;
test_uid = ggt->task_uid == task_uid_v28;
test_gid = ggt->task_gid == task_gid_v28;
} else if (THIS_KERNEL_VERSION == LINUX(2,6,36)) {
test_i_nlink = ggt->get_inode_i_nlink == get_inode_i_nlink_v19;
test_pid = ggt->task_pid == task_pid_vnr;
test_pgrp = ggt->task_pgrp == task_pgrp_vnr;
test_session = ggt->task_session == task_session_vnr;
test_cputime = ggt->thread_group_cputime == thread_group_cputime_v22;
test_uid = ggt->task_uid == task_uid_v28;
test_gid = ggt->task_gid == task_gid_v28;
}

mu_assert("ggt->get_inode_i_nlink has wrongly been registered", test_i_nlink);
mu_assert("ggt->task_pid has wrongly been registered", test_pid);
mu_assert("ggt->task_pgrp has wrongly been registered", test_pgrp);
mu_assert("ggt->task_session has wrongly been registered", test_session);
mu_assert("ggt->thread_group_cputime has wrongly been registered", test_cputime);
mu_assert("ggt->task_uid has wrongly been registered", test_uid);
mu_assert("ggt->task_gid has wrongly been registered", test_gid);

return NULL;
}

#endif /* GCORE_TEST */
/* gcore_defs.h -- core analysis suite
*
* Copyright (C) 2010 FUJITSU LIMITED
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef GCORE_DEFS_H_
#define GCORE_DEFS_H_

#define PN_XNUM 0xffff

#define ELF_CORE_EFLAGS 0

#ifdef X86_64
#define ELF_EXEC_PAGESIZE 4096

#define ELF_MACHINE EM_X86_64
#define ELF_OSABI ELFOSABI_NONE

#define ELF_CLASS ELFCLASS64
#define ELF_DATA ELFDATA2LSB
#define ELF_ARCH EM_X86_64

#define Elf_Half Elf64_Half
#define Elf_Word Elf64_Word
#define Elf_Off Elf64_Off

#define Elf_Ehdr Elf64_Ehdr
#define Elf_Phdr Elf64_Phdr
#define Elf_Shdr Elf64_Shdr
#define Elf_Nhdr Elf64_Nhdr
#elif X86
#define ELF_EXEC_PAGESIZE 4096

#define ELF_MACHINE EM_386
#define ELF_OSABI ELFOSABI_NONE

#define ELF_CLASS ELFCLASS32
#define ELF_DATA ELFDATA2LSB
#define ELF_ARCH EM_386

#define Elf_Half Elf32_Half
#define Elf_Word Elf32_Word
#define Elf_Off Elf32_Off

#define Elf_Ehdr Elf32_Ehdr
#define Elf_Phdr Elf32_Phdr
#define Elf_Shdr Elf32_Shdr
#define Elf_Nhdr Elf32_Nhdr
#endif

/*
 * gcore_regset.c
 *
 * The regset interface is borrowed from the kernel library of the
 * same name, used there to implement the collection of note
 * information. See include/linux/regset.h for details.
 */
struct user_regset;
struct task_context;
struct elf_thread_core_info;

/**
 * user_regset_active_fn - type of @active function in &struct user_regset
 * @target: thread being examined
 * @regset: regset being examined
 *
 * Return TRUE if there is an interesting resource.
 * Return FALSE otherwise.
 */
typedef int user_regset_active_fn(struct task_context *target,
const struct user_regset *regset);

/**
 * user_regset_get_fn - type of @get function in &struct user_regset
 * @target: task context being examined
 * @regset: regset being examined
 * @size: amount of data to copy, in bytes
 * @buf: buffer to copy the register values into
 *
 * Fetch register values. Return TRUE on success and FALSE otherwise.
 * The @size is in bytes.
 */
typedef int user_regset_get_fn(struct task_context *target,
const struct user_regset *regset,
unsigned int size,
void *buf);

/**
* user_regset_writeback_fn - type of @writeback function in &struct user_regset
* @target: thread being examined
* @regset: regset being examined
* @immediate: zero if writeback at completion of next context switch is OK
*
* This call is optional; usually the pointer is %NULL.
*
* Return TRUE on success or FALSE otherwise.
*/
typedef int user_regset_writeback_fn(struct task_context *target,
const struct user_regset *regset,
int immediate);

/**
* user_regset_callback_fn - type of @callback function in &struct user_regset
* @t: thread core information being gathered
* @regset: regset being examined
*
* Edit another piece of information contained in @t in terms of @regset.
* This call is optional; the pointer is %NULL if there is no requirement to
* edit.
*/
typedef void user_regset_callback_fn(struct elf_thread_core_info *t,
const struct user_regset *regset);

/**
 * struct user_regset - accessible thread CPU state
 * @get: Function to fetch values.
 * @active: Function to report if regset is active, or %NULL.
 * @writeback: Function to write values back to the thread, or %NULL.
 * @size: Size in bytes of the regset data.
 * @core_note_type: ELF note @n_type value used in core dumps.
 * @name: Note section name.
 * @callback: Function to edit thread core information, or %NULL.
 *
 * This data structure describes a machine resource to be retrieved
 * for a process core dump. Each member characterizes the resource
 * and the operations needed during the dump.
 *
 * @get retrieves the corresponding resource; @active checks whether
 * the resource exists; @writeback performs an architecture-specific
 * operation to make the resource reflect the current actual state;
 * @size is the size of the resource in bytes; @core_note_type is the
 * type of the note information; @name is the note section name,
 * identifying the originator that handles this kind of resource;
 * @callback is an extra operation that edits other note information
 * of the same thread, applied when the resource is collected.
 */
struct user_regset {
user_regset_get_fn *get;
user_regset_active_fn *active;
user_regset_writeback_fn *writeback;
unsigned int size;
unsigned int core_note_type;
char *name;
user_regset_callback_fn *callback;
};

/**
* struct user_regset_view - available regsets
* @name: Identifier, e.g. UTS_MACHINE string.
* @regsets: Array of @n regsets available in this view.
* @n: Number of elements in @regsets.
* @e_machine: ELF header @e_machine %EM_* value written in core dumps.
* @e_flags: ELF header @e_flags value written in core dumps.
* @ei_osabi: ELF header @e_ident[%EI_OSABI] value written in core dumps.
*
 * A regset view is a collection of regsets (&struct user_regset,
 * above). It describes all of the thread state that is collected
 * as note information in a process core dump.
*/
struct user_regset_view {
const char *name;
const struct user_regset *regsets;
unsigned int n;
uint32_t e_flags;
uint16_t e_machine;
uint8_t ei_osabi;
};

/**
 * task_user_regset_view - Return the process's regset view.
 *
 * Return the &struct user_regset_view. By default, it returns
 * &gcore_default_regset_view.
 *
 * This is defined as a weak symbol. If another
 * task_user_regset_view is present at link time, it is used
 * instead; this is useful for supporting other kernel versions
 * or architectures.
 */
extern const struct user_regset_view *task_user_regset_view(void);
extern void gcore_default_regsets_init(void);

#if X86
#define REGSET_VIEW_NAME "i386"
#define REGSET_VIEW_MACHINE EM_386
#elif X86_64
#define REGSET_VIEW_NAME "x86_64"
#define REGSET_VIEW_MACHINE EM_X86_64
#elif IA64
#define REGSET_VIEW_NAME "ia64"
#define REGSET_VIEW_MACHINE EM_IA_64
#endif

/*
* gcore_dumpfilter.c
*/
extern int gcore_dumpfilter_set(ulong filter);
extern void gcore_dumpfilter_set_default(void);
extern ulong gcore_dumpfilter_vma_dump_size(ulong vma);

/*
* gcore_verbose.c
*/
#define VERBOSE_PROGRESS 0x1
#define VERBOSE_NONQUIET 0x2
#define VERBOSE_PAGEFAULT 0x4
#define VERBOS
 
