The last important chapter on memory management.

Allocating memory for user-mode processes differs from the kernel case:
- A process's requests for dynamic memory are considered non-urgent, so the kernel tries to defer the allocation as long as possible
- Since user-mode processes cannot be trusted, the kernel must be prepared at all times to catch any addressing error they cause

When a user-mode process requests dynamic memory, it does not receive page frames right away; it merely obtains the right to use a new interval of linear addresses. That interval becomes part of the process address space and is called a memory region: simply a contiguous range of linear (virtual) addresses.

The book starts from the process address space; I find it easier to go bottom-up and start from memory regions.
Memory regions
Linux implements memory regions with objects of type vm_area_struct; you can compare the annotations in the book against the actual code.
/*
* This struct defines a memory VMM memory area. There is one of these
* per VM-area/task. A VM area is any part of the process virtual memory
* space that has a special rule for the page-fault handlers (ie a shared
* library, the executable area etc).
*/
struct vm_area_struct {
struct mm_struct * vm_mm; /* The address space we belong to. */
unsigned long vm_start; /* Our start address within vm_mm. */
unsigned long vm_end; /* The first byte after our end address
within vm_mm. */
/* linked list of VM areas per task, sorted by address */
struct vm_area_struct *vm_next;
pgprot_t vm_page_prot; /* Access permissions of this VMA. */
unsigned long vm_flags; /* Flags, listed below. */
struct rb_node vm_rb;
/*
* For areas with an address space and backing store,
* linkage into the address_space->i_mmap prio tree, or
* linkage to the list of like vmas hanging off its node, or
* linkage of vma in the address_space->i_mmap_nonlinear list.
*/
union {
struct {
struct list_head list;
void *parent; /* aligns with prio_tree_node parent */
struct vm_area_struct *head;
} vm_set;
struct prio_tree_node prio_tree_node;
} shared;
/*
* A file's MAP_PRIVATE vma can be in both i_mmap tree and anon_vma
* list, after a COW of one of the file pages. A MAP_SHARED vma
* can only be in the i_mmap tree. An anonymous MAP_PRIVATE, stack
* or brk vma (with NULL file) can only be in an anon_vma list.
*/
struct list_head anon_vma_node; /* Serialized by anon_vma->lock */
struct anon_vma *anon_vma; /* Serialized by page_table_lock */
/* Function pointers to deal with this struct. */
struct vm_operations_struct * vm_ops;
/* Information about our backing store: */
unsigned long vm_pgoff; /* Offset (within vm_file) in PAGE_SIZE
units, *not* PAGE_CACHE_SIZE */
struct file * vm_file; /* File we map to (can be NULL). */
void * vm_private_data; /* was vm_pte (shared mem) */
#ifdef CONFIG_NUMA
struct mempolicy *vm_policy; /* NUMA policy for the VMA */
#endif
};
A few fields are worth highlighting:
- vm_start: the start of the region, i.e. the first linear address it contains
- vm_end: the first address beyond the region, so vm_end - vm_start gives the region's length
- vm_mm: points to the memory descriptor of the process owning this interval (introduced later)
- vm_rb: the node used by the red-black tree (more below)
- vm_flags: the region's flags
A single contiguous range of virtual addresses cannot cover all the user-mode space a process uses, so a process inevitably needs several memory regions. A process's regions never overlap, and the kernel tries hard to merge a newly allocated region with an adjacent existing one (they are merged when their access rights match).

To keep track of all its regions, the simplest way for a process to string them together is a linked list. But once the number of regions grows large, the cost of searching, inserting, and deleting in the list grows with it. Linux 2.6 therefore also keeps the regions in a red-black tree, built from vm_mm and vm_rb. As a rule, the red-black tree is used to locate the region containing a given address, while the list is used when scanning the whole set of regions.
Access rights of a memory region
Let me revisit the paging structures here (reading along while tying earlier knowledge together). The two-level directory layout is shown below (image from: https://zhuanlan.zhihu.com/p/467528036).

Because page frames and pages are both 4096 bytes in size (or multiples thereof), the low 12 bits of a page directory entry or page table entry are not needed for the address, so they hold a set of per-page flags.

The page-related flags of every memory region are also kept here; see the book for the details.
But vm_flags cannot be translated directly into the flag bits on a page. The book describes quite a few cases here; let me cite just one example I consider important:

The earlier notes on processes mentioned copy-on-write: two processes map the same page of data, and in that state neither process is allowed to modify the page directly.
Handling memory regions
mmap establishes a mapping in the address space; a memory region is, in essence, built and then mapped. Under the hood, mmap ends up in do_mmap(), the function that allocates a linear address interval.

First, the helper functions that do_mmap() is built on:
- find_vma(): find the region closest to a given address
/* Look up the first VMA which satisfies addr < vm_end, NULL if none. */
struct vm_area_struct * find_vma(struct mm_struct * mm, unsigned long addr)
{
struct vm_area_struct *vma = NULL;
if (mm) {
/* Check the cache first. */
/* (Cache hit rate is typically around 35%.) */
vma = mm->mmap_cache;
if (!(vma && vma->vm_end > addr && vma->vm_start <= addr)) {
struct rb_node * rb_node;
rb_node = mm->mm_rb.rb_node;
vma = NULL;
while (rb_node) {
struct vm_area_struct * vma_tmp;
vma_tmp = rb_entry(rb_node,
struct vm_area_struct, vm_rb);
if (vma_tmp->vm_end > addr) {
vma = vma_tmp;
if (vma_tmp->vm_start <= addr)
break;
rb_node = rb_node->rb_left;
} else
rb_node = rb_node->rb_right;
}
if (vma)
mm->mmap_cache = vma;
}
}
return vma;
}
This function simply finds the first memory region whose vm_end is greater than addr.
- find_vma_intersection(): find a region that overlaps a given address interval
/* Look up the first VMA which intersects the interval start_addr..end_addr-1,
NULL if none. Assume start_addr < end_addr. */
static inline struct vm_area_struct * find_vma_intersection(struct mm_struct * mm, unsigned long start_addr, unsigned long end_addr)
{
	struct vm_area_struct * vma = find_vma(mm,start_addr);

	if (vma && end_addr <= vma->vm_start)
		vma = NULL;
	return vma;
}
Straightforward enough.
- get_unmapped_area(): find a free address interval
unsigned long
get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
unsigned long pgoff, unsigned long flags)
{
if (flags & MAP_FIXED) {
unsigned long ret;
if (addr > TASK_SIZE - len)
return -ENOMEM;
if (addr & ~PAGE_MASK)
return -EINVAL;
if (file && is_file_hugepages(file)) {
/*
* Check if the given range is hugepage aligned, and
* can be made suitable for hugepages.
*/
ret = prepare_hugepage_range(addr, len);
} else {
/*
* Ensure that a normal request is not falling in a
* reserved hugepage range. For some archs like IA-64,
* there is a separate region for hugepages.
*/
ret = is_hugepage_only_range(addr, len);
}
if (ret)
return -EINVAL;
return addr;
}
if (file && file->f_op && file->f_op->get_unmapped_area)
return file->f_op->get_unmapped_area(file, addr, len,
pgoff, flags);
return current->mm->get_unmapped_area(file, addr, len, pgoff, flags);
}
At the bottom it dispatches through a function pointer to different policies; let's read the most basic one, arch_get_unmapped_area().
unsigned long
arch_get_unmapped_area(struct file *filp, unsigned long addr,
		unsigned long len, unsigned long pgoff, unsigned long flags)
{
	struct mm_struct *mm = current->mm;
	struct vm_area_struct *vma;
	unsigned long start_addr;

	/* Check the requested region size */
	if (len > TASK_SIZE)
		return -ENOMEM;

	/* If addr is non-zero, try to allocate starting at addr */
	if (addr) {
		/* Align to a page boundary */
		addr = PAGE_ALIGN(addr);
		vma = find_vma(mm, addr);
		/* Allocation succeeds right away -- worth sketching on paper */
		if (TASK_SIZE - len >= addr &&
		    (!vma || addr + len <= vma->vm_start))
			return addr;
	}

	/*
	 * Start searching from the linear address just after the most
	 * recently allocated region; if nothing suitable is found there,
	 * restart from one third of the user-mode address space, i.e.
	 * free_area_cache.
	 */
	start_addr = addr = mm->free_area_cache;

full_search:
	for (vma = find_vma(mm, addr); ; vma = vma->vm_next) {
		/* At this point: (!vma || addr < vma->vm_end). */
		if (TASK_SIZE - len < addr) {
			/*
			 * Start a new search - just in case we missed
			 * some holes.
			 */
			if (start_addr != TASK_UNMAPPED_BASE) {
				start_addr = addr = TASK_UNMAPPED_BASE;
				goto full_search;
			}
			return -ENOMEM;
		}
		if (!vma || addr + len <= vma->vm_start) {
			/*
			 * Remember the place where we stopped the search:
			 */
			mm->free_area_cache = addr + len;
			return addr;
		}
		addr = vma->vm_end;
	}
}
I've annotated the code with comments.
- insert_vm_struct(): insert a memory region into the list

The insertion has to go into two data structures:
- into the list of memory regions
- into the red-black tree of the memory descriptor (each region has a member pointing back to its memory descriptor)

- do_mmap(): allocate a linear address interval

With all the helpers covered, we can finally allocate a linear address interval.

do_mmap() creates and initializes a new memory region for the current process. After a successful allocation, the new region may be merged with other regions the process already owns.
static inline unsigned long do_mmap(struct file *file, unsigned long addr,
	unsigned long len, unsigned long prot,
	unsigned long flag, unsigned long offset)
{
	unsigned long ret = -EINVAL;
	if ((offset + PAGE_ALIGN(len)) < offset)
		goto out;
	if (!(offset & ~PAGE_MASK))
		ret = do_mmap_pgoff(file, addr, len, prot, flag, offset >> PAGE_SHIFT);
out:
	return ret;
}
do_mmap()'s parameters:
- file: if the new region will map a file into memory, the file descriptor file and the file offset offset are used; when no file mapping is involved, file and offset are ignored and can both be null/zero;
- addr: where to start looking for a free interval; usually NULL, letting the kernel choose;
- len: the length of the requested linear address interval;
- prot: the access rights of the pages in the region;
- flag: the region's other flags;

The core of do_mmap() is do_mmap_pgoff(), a very large function; I referred to http://kerneltravel.net/blog/2020/lp_817/.
- do_munmap(): release a linear address interval

Two main concerns:
- If a region need not be released in full, it has to be split so that only part of it is freed
- The corresponding page frames are released and the page tables updated
The memory descriptor

All information about a process address space is kept in a data structure called the memory descriptor (mm_struct); the mm field of the process descriptor points to it.
struct mm_struct {
struct vm_area_struct * mmap; /* list of VMAs */
struct rb_root mm_rb;
struct vm_area_struct * mmap_cache; /* last find_vma result */
unsigned long (*get_unmapped_area) (struct file *filp,
unsigned long addr, unsigned long len,
unsigned long pgoff, unsigned long flags);
void (*unmap_area) (struct vm_area_struct *area);
unsigned long mmap_base; /* base of mmap area */
unsigned long free_area_cache; /* first hole */
pgd_t * pgd;
atomic_t mm_users; /* How many users with user space? */
atomic_t mm_count; /* How many references to "struct mm_struct" (users count as 1) */
int map_count; /* number of VMAs */
struct rw_semaphore mmap_sem;
spinlock_t page_table_lock; /* Protects task page tables and mm->rss */
struct list_head mmlist; /* List of all active mm's. These are globally strung
* together off init_mm.mmlist, and are protected
* by mmlist_lock
*/
unsigned long start_code, end_code, start_data, end_data;
unsigned long start_brk, brk, start_stack;
unsigned long arg_start, arg_end, env_start, env_end;
unsigned long rss, total_vm, locked_vm, shared_vm;
unsigned long exec_vm, stack_vm, reserved_vm, def_flags;
unsigned long saved_auxv[42]; /* for /proc/PID/auxv */
unsigned dumpable:1;
cpumask_t cpu_vm_mask;
/* Architecture-specific MM context */
mm_context_t context;
/* Token based thrashing protection. */
unsigned long swap_token_time;
char recent_pagein;
/* coredumping support */
int core_waiters;
struct completion *core_startup_done, core_done;
/* aio bits */
rwlock_t ioctx_list_lock;
struct kioctx *ioctx_list;
struct kioctx default_kioctx;
};
With memory regions already covered above, the book plus the figure below should make this structure understandable (image from https://terenceli.github.io/%E6%8A%80%E6%9C%AF/2014/10/10/linux-process-vm).

Note that mmap_cache holds the most recently referenced region object, not the last region in the list: thanks to locality of reference, consecutive accesses often hit the same region, which is exactly why this member exists.
The page fault handler

It must distinguish faults caused by a page frame not yet having been allocated from those caused by programming errors; do_page_fault() handles the exception accordingly.
/*
* This routine handles page faults. It determines the address,
* and the problem, and then passes it off to one of the appropriate
* routines.
*
* error_code:
* bit 0 == 0 means no page found, 1 means protection fault
* bit 1 == 0 means read, 1 means write
* bit 2 == 0 means kernel, 1 means user-mode
* bit 3 == 1 means fault was an instruction fetch
*/
asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long error_code)
{
struct task_struct *tsk;
struct mm_struct *mm;
struct vm_area_struct * vma;
unsigned long address;
const struct exception_table_entry *fixup;
int write;
siginfo_t info;
#ifdef CONFIG_CHECKING
{
unsigned long gs;
struct x8664_pda *pda = cpu_pda + stack_smp_processor_id();
rdmsrl(MSR_GS_BASE, gs);
if (gs != (unsigned long)pda) {
wrmsrl(MSR_GS_BASE, pda);
printk("page_fault: wrong gs %lx expected %p\n", gs, pda);
}
}
#endif
/* get the address */
__asm__("movq %%cr2,%0":"=r" (address));
if (likely(regs->eflags & X86_EFLAGS_IF))
local_irq_enable();
if (unlikely(page_fault_trace))
printk("pagefault rip:%lx rsp:%lx cs:%lu ss:%lu address %lx error %lx\n",
regs->rip,regs->rsp,regs->cs,regs->ss,address,error_code);
tsk = current;
mm = tsk->mm;
info.si_code = SEGV_MAPERR;
/*
* We fault-in kernel-space virtual memory on-demand. The
* 'reference' page table is init_mm.pgd.
*
* NOTE! We MUST NOT take any locks for this case. We may
* be in an interrupt or a critical region, and should
* only copy the information from the master page table,
* nothing more.
*
* This verifies that the fault happens in kernel space
* (error_code & 4) == 0, and that the fault was not a
* protection error (error_code & 1) == 0.
*/
if (unlikely(address >= TASK_SIZE)) {
if (!(error_code & 5))
goto vmalloc_fault;
/*
* Don't take the mm semaphore here. If we fixup a prefetch
* fault we could otherwise deadlock.
*/
goto bad_area_nosemaphore;
}
if (unlikely(error_code & (1 << 3)))
goto page_table_corruption;
/*
* If we're in an interrupt or have no user
* context, we must not take the fault..
*/
if (unlikely(in_atomic() || !mm))
goto bad_area_nosemaphore;
again:
/* When running in the kernel we expect faults to occur only to
* addresses in user space. All other faults represent errors in the
* kernel and should generate an OOPS. Unfortunatly, in the case of an
* erroneous fault occuring in a code path which already holds mmap_sem
* we will deadlock attempting to validate the fault against the
* address space. Luckily the kernel only validly references user
* space from well defined areas of code, which are listed in the
* exceptions table.
*
* As the vast majority of faults will be valid we will only perform
* the source reference check when there is a possibilty of a deadlock.
* Attempt to lock the address space, if we cannot we then validate the
* source. If this is invalid we can skip the address space check,
* thus avoiding the deadlock.
*/
if (!down_read_trylock(&mm->mmap_sem)) {
if ((error_code & 4) == 0 &&
!search_exception_tables(regs->rip))
goto bad_area_nosemaphore;
down_read(&mm->mmap_sem);
}
vma = find_vma(mm, address);
if (!vma)
goto bad_area;
if (likely(vma->vm_start <= address))
goto good_area;
if (!(vma->vm_flags & VM_GROWSDOWN))
goto bad_area;
if (error_code & 4) {
// XXX: align red zone size with ABI
if (address + 128 < regs->rsp)
goto bad_area;
}
if (expand_stack(vma, address))
goto bad_area;
/*
* Ok, we have a good vm_area for this memory access, so
* we can handle it..
*/
good_area:
info.si_code = SEGV_ACCERR;
write = 0;
switch (error_code & 3) {
default: /* 3: write, present */
/* fall through */
case 2: /* write, not present */
if (!(vma->vm_flags & VM_WRITE))
goto bad_area;
write++;
break;
case 1: /* read, present */
goto bad_area;
case 0: /* read, not present */
if (!(vma->vm_flags & (VM_READ | VM_EXEC)))
goto bad_area;
}
/*
* If for any reason at all we couldn't handle the fault,
* make sure we exit gracefully rather than endlessly redo
* the fault.
*/
switch (handle_mm_fault(mm, vma, address, write)) {
case 1:
tsk->min_flt++;
break;
case 2:
tsk->maj_flt++;
break;
case 0:
goto do_sigbus;
default:
goto out_of_memory;
}
up_read(&mm->mmap_sem);
return;
/*
* Something tried to access memory that isn't in our memory map..
* Fix it, but check if it's kernel or user first..
*/
bad_area:
up_read(&mm->mmap_sem);
bad_area_nosemaphore:
#ifdef CONFIG_IA32_EMULATION
/* 32bit vsyscall. map on demand. */
if (test_thread_flag(TIF_IA32) &&
address >= 0xffffe000 && address < 0xffffe000 + PAGE_SIZE) {
if (map_syscall32(mm, address) < 0)
goto out_of_memory2;
return;
}
#endif
/* User mode accesses just cause a SIGSEGV */
if (error_code & 4) {
if (is_prefetch(regs, address, error_code))
return;
/* Work around K8 erratum #100 K8 in compat mode
occasionally jumps to illegal addresses >4GB. We
catch this here in the page fault handler because
these addresses are not reachable. Just detect this
case and return. Any code segment in LDT is
compatibility mode. */
if ((regs->cs == __USER32_CS || (regs->cs & (1<<2))) &&
(address >> 32))
return;
if (exception_trace && unhandled_signal(tsk, SIGSEGV)) {
printk(KERN_INFO
"%s[%d]: segfault at %016lx rip %016lx rsp %016lx error %lx\n",
tsk->comm, tsk->pid, address, regs->rip,
regs->rsp, error_code);
}
tsk->thread.cr2 = address;
/* Kernel addresses are always protection faults */
tsk->thread.error_code = error_code | (address >= TASK_SIZE);
tsk->thread.trap_no = 14;
info.si_signo = SIGSEGV;
info.si_errno = 0;
/* info.si_code has been set above */
info.si_addr = (void __user *)address;
force_sig_info(SIGSEGV, &info, tsk);
return;
}
no_context:
/* Are we prepared to handle this kernel fault? */
fixup = search_exception_tables(regs->rip);
if (fixup) {
regs->rip = fixup->fixup;
return;
}
/*
* Hall of shame of CPU/BIOS bugs.
*/
if (is_prefetch(regs, address, error_code))
return;
if (is_errata93(regs, address))
return;
/*
* Oops. The kernel tried to access some bad page. We'll have to
* terminate things with extreme prejudice.
*/
oops_begin();
if (address < PAGE_SIZE)
printk(KERN_ALERT "Unable to handle kernel NULL pointer dereference");
else
printk(KERN_ALERT "Unable to handle kernel paging request");
printk(" at %016lx RIP: \n" KERN_ALERT,address);
printk_address(regs->rip);
printk("\n");
dump_pagetable(address);
__die("Oops", regs, error_code);
/* Executive summary in case the body of the oops scrolled away */
printk(KERN_EMERG "CR2: %016lx\n", address);
oops_end();
do_exit(SIGKILL);
/*
* We ran out of memory, or some other thing happened to us that made
* us unable to handle the page fault gracefully.
*/
out_of_memory:
up_read(&mm->mmap_sem);
out_of_memory2:
if (current->pid == 1) {
yield();
goto again;
}
printk("VM: killing process %s\n", tsk->comm);
if (error_code & 4)
do_exit(SIGKILL);
goto no_context;
do_sigbus:
up_read(&mm->mmap_sem);
/* Kernel mode? Handle exceptions or die */
if (!(error_code & 4))
goto no_context;
tsk->thread.cr2 = address;
tsk->thread.error_code = error_code;
tsk->thread.trap_no = 14;
info.si_signo = SIGBUS;
info.si_errno = 0;
info.si_code = BUS_ADRERR;
info.si_addr = (void __user *)address;
force_sig_info(SIGBUS, &info, tsk);
return;
vmalloc_fault:
{
pgd_t *pgd;
pmd_t *pmd;
pte_t *pte;
/*
* x86-64 has the same kernel 3rd level pages for all CPUs.
* But for vmalloc/modules the TLB synchronization works lazily,
* so it can happen that we get a page fault for something
* that is really already in the page table. Just check if it
* is really there and when yes flush the local TLB.
*/
pgd = pgd_offset_k(address);
if (!pgd_present(*pgd))
goto bad_area_nosemaphore;
pmd = pmd_offset(pgd, address);
if (!pmd_present(*pmd))
goto bad_area_nosemaphore;
pte = pte_offset_kernel(pmd, address);
if (!pte_present(*pte))
goto bad_area_nosemaphore;
__flush_tlb_all();
return;
}
page_table_corruption:
pgtable_bad(address, regs, error_code);
}
error_code is made up of a few bits whose exact meaning is spelled out in the comment at the top of the function. As you can see this is another huge function, so let's take it piece by piece. The flow is as follows.
- First, read the linear address that triggered the fault from the cr2 register and save it in the local variable address.
/* get the address */
__asm__("movq %%cr2,%0":"=r" (address));
- Check whether the faulting linear address belongs to kernel space (the fourth gigabyte in the 32-bit layout the book describes; in this code the test is address >= TASK_SIZE)
/*
* We fault-in kernel-space virtual memory on-demand. The
* 'reference' page table is init_mm.pgd.
*
* NOTE! We MUST NOT take any locks for this case. We may
* be in an interrupt or a critical region, and should
* only copy the information from the master page table,
* nothing more.
*
* This verifies that the fault happens in kernel space
* (error_code & 4) == 0, and that the fault was not a
* protection error (error_code & 1) == 0.
*/
if (unlikely(address >= TASK_SIZE)) {
if (!(error_code & 5))
goto vmalloc_fault;
/*
* Don't take the mm semaphore here. If we fixup a prefetch
* fault we could otherwise deadlock.
*/
goto bad_area_nosemaphore;
}
- Check whether the fault occurred in an interrupt handler, a deferrable function, a critical region, or a kernel thread
/*
* If we're in an interrupt or have no user
* context, we must not take the fault..
*/
if (unlikely(in_atomic() || !mm))
goto bad_area_nosemaphore;
- Check whether the faulting linear address is contained in the process address space
if (!down_read_trylock(&mm->mmap_sem)) {
if ((error_code & 4) == 0 &&
!search_exception_tables(regs->rip))
goto bad_area_nosemaphore;
down_read(&mm->mmap_sem);
}
- Take the read/write semaphore and search for the memory region containing the faulting address

The read/write semaphore structure looks like this:
/*
* the rw-semaphore definition
* - if activity is 0 then there are no active readers or writers
* - if activity is +ve then that is the number of active readers
* - if activity is -1 then there is one active writer
* - if wait_list is not empty, then there are processes waiting for the semaphore
*/
struct rw_semaphore {
__s32 activity;
spinlock_t wait_lock;
struct list_head wait_list;
#if RWSEM_DEBUG
int debug;
#endif
};
vma = find_vma(mm, address);
if (!vma)
goto bad_area;
if (likely(vma->vm_start <= address))
goto good_area;
If address is not contained in any region, goto bad_area; if it is, goto good_area. There is a third possibility: the faulting address may have been produced by a push or pusha instruction operating on the process's user-mode stack.

The book describes in detail how the stack maps onto a memory region. A region holding a stack that grows toward lower addresses has its VM_GROWSDOWN flag set; its vm_start keeps decreasing while vm_end stays fixed:
- The region's size is a multiple of 4 KB, while the stack's size is arbitrary
- Page frames given to the region are not freed before the region is deleted, and a stack region's vm_start can only decrease; a sequence of pop instructions leaves the region's size unchanged

So once the process has filled the last page frame allocated to its stack, the next push references an address outside the region and raises a page fault.

The handler first checks whether the region's VM_GROWSDOWN flag is set and whether the fault occurred in user mode. For a user-mode fault, it checks whether address is below the stack pointer esp (in my code it is rsp, a different word size) by more than the small reserve just beyond the stack top; if so, goto bad_area. Otherwise expand_stack() tries to grow the region, and on failure it likewise goes to bad_area.
if (!(vma->vm_flags & VM_GROWSDOWN))
goto bad_area;
if (error_code & 4) {
// XXX: align red zone size with ABI
if (address + 128 < regs->rsp)
goto bad_area;
}
if (expand_stack(vma, address))
goto bad_area;
Note that for a kernel-mode fault the rsp check is skipped and expand_stack() is attempted directly once the stack's page frames are full.
Handling a faulty address outside the address space

That is, the code at bad_area.

If the fault occurred in user mode, a SIGSEGV signal is sent to the current process and the function returns.

If it occurred in kernel mode, we still have to distinguish between a linear address passed into the kernel as a system call parameter and a genuine kernel defect. This part needs other chapters to make full sense; for now I'm still in the dark.

Handling a faulty address inside the address space

Here address does belong to the process address space. If the fault was caused by a write access, the handler checks whether the region is writable: if it is, the local variable write is set to 1; otherwise it jumps to bad_area.

If the fault was caused by a read or execute access, the handler first checks whether the page is already present in RAM. If it is, the process tried to access a privileged page frame from user mode, so jump to bad_area. If it is not present, the handler additionally checks that the region is readable or executable.
/*
* Ok, we have a good vm_area for this memory access, so
* we can handle it..
*/
good_area:
info.si_code = SEGV_ACCERR;
write = 0;
switch (error_code & 3) {
default: /* 3: write, present */
/* fall through */
case 2: /* write, not present */
if (!(vma->vm_flags & VM_WRITE))
goto bad_area;
write++;
break;
case 1: /* read, present */
goto bad_area;
case 0: /* read, not present */
if (!(vma->vm_flags & (VM_READ | VM_EXEC)))
goto bad_area;
}
/*
* If for any reason at all we couldn't handle the fault,
* make sure we exit gracefully rather than endlessly redo
* the fault.
*/
switch (handle_mm_fault(mm, vma, address, write)) {
case 1:
tsk->min_flt++;
break;
case 2:
tsk->maj_flt++;
break;
case 0:
goto do_sigbus;
default:
goto out_of_memory;
}
up_read(&mm->mmap_sem);
return;
If the region is readable or executable and the access is a read of a not-present page, the function calls handle_mm_fault() to allocate a page frame for the process. Its return values map to different outcomes:
/*
* Different kinds of faults, as returned by handle_mm_fault().
* Used to decide whether a process gets delivered SIGBUS or
* just gets major/minor fault counters bumped up.
*/
#define VM_FAULT_OOM (-1) /* not enough memory */
#define VM_FAULT_SIGBUS 0 /* any other error */
#define VM_FAULT_MINOR 1 /* handled the fault without blocking the current process */
#define VM_FAULT_MAJOR 2 /* the fault forced the current process to sleep (likely filling the frame from disk) */
A closer look at handle_mm_fault() (the code may differ slightly from the book).
/*
* By the time we get here, we already hold the mm semaphore
*/
int handle_mm_fault(struct mm_struct *mm, struct vm_area_struct * vma,
	unsigned long address, int write_access)
{
	pgd_t *pgd; /* page global directory entry covering address */
	pmd_t *pmd;

	__set_current_state(TASK_RUNNING);
	pgd = pgd_offset(mm, address);

	inc_page_state(pgfault);

	if (is_vm_hugetlb_page(vma))
		return VM_FAULT_SIGBUS;	/* mapping truncation does this. */

	/*
	 * We need the page table lock to synchronize with kswapd
	 * and the SMP-safe atomic PTE updates.
	 */
	spin_lock(&mm->page_table_lock);
	pmd = pmd_alloc(mm, pgd, address);

	if (pmd) {
		pte_t * pte = pte_alloc_map(mm, pmd, address); /* page table entry for address */
		if (pte)
			return handle_pte_fault(mm, vma, address, write_access, pte, pmd); /* inspect the entry and decide how to allocate a new frame */
	}
	spin_unlock(&mm->page_table_lock);
	return VM_FAULT_OOM;
}
When handle_pte_fault() examines the page table entry:
- If the accessed page is not present, i.e. it is not held in any page frame, the kernel allocates a new page frame and initializes it (demand paging)
- If the accessed page is present but marked read-only, the kernel allocates a new page frame and copies the old data into it (copy-on-write)
Demand paging

When a process starts running, it does not access every address in its address space; many addresses may never be touched before the process exits. Page frames should, as far as possible, not hold data that is not currently being accessed (apart from data prefetched thanks to the locality principle), which raises the overall throughput of the system. Allocation is therefore deferred until the process accesses a page that is not in RAM, at which point the fault is raised and demand paging kicks in. The cost, of course, is that every such page fault must be handled by the kernel, burning CPU cycles.
The accessed page may be absent from main memory either because the process never accessed it, or because the kernel reclaimed its page frame. The fault handler must then allocate a new page frame for the process; how the frame is initialized depends on the kind of page and on whether the process accessed it before, which handle_pte_fault() determines as follows:
- The page was never accessed by the process: the corresponding page table entry is zero
- The page belongs to a non-linear file mapping: the Present flag is 0 and the Dirty flag is 1
- The process already accessed the page and its contents were temporarily saved to disk: the entry is non-zero, but Present and Dirty are 0

In this chapter the book discusses only the first case, which goes on to call do_no_page().
/*
* do_no_page() tries to create a new page mapping. It aggressively
* tries to share with existing pages, but makes a separate copy if
* the "write_access" parameter is true in order to avoid the next
* page fault.
*
* As this is called only for pages that do not currently exist, we
* do not need to flush old virtual caches or the TLB.
*
* This is called with the MM semaphore held and the page table
* spinlock held. Exit with the spinlock released.
*/
static int
do_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long address, int write_access, pte_t *page_table, pmd_t *pmd)
{
struct page * new_page;
struct address_space *mapping = NULL;
pte_t entry;
int sequence = 0;
int ret = VM_FAULT_MINOR;
int anon = 0;
if (!vma->vm_ops || !vma->vm_ops->nopage)
return do_anonymous_page(mm, vma, page_table,
pmd, write_access, address);
pte_unmap(page_table);
spin_unlock(&mm->page_table_lock);
if (vma->vm_file) {
mapping = vma->vm_file->f_mapping;
sequence = atomic_read(&mapping->truncate_count);
}
smp_rmb(); /* Prevent CPU from reordering lock-free ->nopage() */
It first checks whether the page maps a disk file; if so, the required data has to be loaded from the file into RAM.

In the Linux kernel, vm_ops->nopage is a virtual-memory callback used to service page faults. When a process accesses a virtual address that is not yet backed by physical memory, the fault fires and the kernel invokes vm_ops->nopage. When vm_ops->nopage is non-NULL here, it is the function that loads the needed data in from the disk file.

If vm_ops or vm_ops->nopage is NULL, the region maps no disk file, and do_anonymous_page() is called directly to obtain a new page frame. (Pages that were accessed before and then swapped out are a separate case, handled by do_swap_page() from handle_pte_fault(), not by do_no_page().)
Copy-on-write

The page is clearly present in memory, yet the fault occurred because it is not writable; that triggers copy-on-write via do_wp_page().
if (write_access) {
if (!pte_write(entry))
return do_wp_page(mm, vma, address, pte, pmd, entry);
entry = pte_mkdirty(entry);
}
Copy-on-write (COW) defers the copy to the moment of the first write: when a new copy is created, the resource is not duplicated immediately but shared with the original, and the actual copy happens only on modification. Sharing resources this way greatly reduces the cost of creating copies and saves memory, at the price of a little extra work on modifying writes.

Originally, when a Unix system executed fork(), it duplicated the parent's entire address space for the child, which was very expensive. In many cases the child only reads, and most of the contents do not concern it, so Linux adopted copy-on-write: parent and child share page frames, and as long as a frame is shared it cannot be modified. Whenever either the parent or the child tries to write a shared frame, the kernel copies the frame into a new one and marks it writable. When a process attempts such a write, the kernel first checks whether that process is the frame's sole owner; if it is, the frame is simply made writable.
I've borrowed a figure from https://imageslr.com/2020/copy-on-write.html to help illustrate this.
Handling noncontiguous memory area accesses

Once the faulting linear address is found to be greater than TASK_SIZE (and the fault is neither a protection error nor a user-mode access), it must come from an access to a noncontiguous memory area: after kernel initialization finishes, no process or kernel thread uses the master kernel page tables directly, so the process's own page tables may simply be missing entries that exist only in the master tables.