author:    Steven Rostedt (VMware) <rostedt@goodmis.org>  2017-04-03 18:57:35 +0200
committer: Steven Rostedt (VMware) <rostedt@goodmis.org>  2017-04-03 20:04:00 +0200
commit:    b80f0f6c9ed3958ff4002b6135f43a1ef312a610
tree:      c30589b7e985125ef81a56e84aef47cb21207378 /mm/page_alloc.c
parent:    ftrace: Create separate t_func_next() to simplify the function / hash logic
ftrace: Have init/main.c call ftrace directly to free init memory
Relying on free_reserved_area() to call ftrace to free init memory proved
insufficient. The issue is that on x86, when debug_pagealloc is enabled, the
init memory is not freed but simply marked not present. Since ftrace was not
informed of this, enabling function tracing would still try to update pages
that, according to the page tables, are not present, causing ftrace to
trigger a BUG as well as killing the kernel itself.
Instead of relying on free_reserved_area(), have init/main.c call ftrace
directly just before it frees the init memory. It then uses __init_begin and
__init_end to determine where the init memory is located.
Looking at all archs (and testing what I can), it appears that this should
work for each of them.
Reported-by: kernel test robot <xiaolong.ye@intel.com>
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Diffstat (limited to 'mm/page_alloc.c')
-rw-r--r--  mm/page_alloc.c | 3 ---
1 file changed, 0 insertions(+), 3 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index eee82bfb7cd8..178bf9c2a2cb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6606,9 +6606,6 @@ unsigned long free_reserved_area(void *start, void *end, int poison, char *s)
 	void *pos;
 	unsigned long pages = 0;
 
-	/* This may be .init text, inform ftrace to remove it */
-	ftrace_free_mem(start, end);
-
 	start = (void *)PAGE_ALIGN((unsigned long)start);
 	end = (void *)((unsigned long)end & PAGE_MASK);
 	for (pos = start; pos < end; pos += PAGE_SIZE, pages++) {