
Numactl

numactl github: https://github.com/numactl/numactl

1. NUMA technology

https://blog.csdn.net/don_chiang709/article/details/100735052
NUMA divides the CPUs into groups (nodes). Each node consists of several CPUs and has its own local memory, I/O, and other resources. Nodes are connected through interconnect modules, so besides its local memory each CPU can still reach memory on remote nodes, only less efficiently; the cost of this cross-node access is modeled by an abstract distance between nodes.
**Node -> Socket -> Core -> Processor**
As multi-core technology developed, multiple CPU cores came to be packaged together; the package plugs into a slot on the motherboard called a socket. A core is an independent hardware execution unit on the socket. Intel's Hyper-Threading (HT) raises throughput further, and the logical cores the OS ends up seeing are called processors.
**socket = node**
A socket is a physical concept: the CPU slot on the motherboard. A node is a logical concept that typically corresponds to a socket.
**core = physical CPU**
A core is a physical concept: an independent hardware execution unit, i.e. a physical CPU.
https://www.cnblogs.com/machangwei-8/p/10402644.html
Because SMP is limited in how far it can scale, people looked for ways to build large systems that scale effectively, and NUMA is one result of that effort. With NUMA, dozens (even hundreds) of CPUs can be combined in a single server. The basic feature of a NUMA server is that it has multiple CPU modules; each module consists of several CPUs (e.g. four) and has its own local memory, I/O slots, and so on. Since the nodes are connected through an interconnect module (such as a crossbar switch), every CPU can access the memory of the whole system -- an important difference between NUMA and MPP systems. Obviously, accessing local memory is far faster than accessing remote memory (memory on other nodes in the system), which is where the name Non-Uniform Memory Access comes from. Because of this, applications should minimize traffic between different CPU modules to get the best performance out of the system. NUMA solves the scaling problem of the older SMP design fairly well: a single physical server can support over a hundred CPUs. Typical NUMA servers include the HP Superdome, SUN 15K, and IBM p690.
The CPU modules are all interconnected through the interconnect module, so every CPU can reach every other. At the same time, each CPU module is divided into several chips (no more than four), each with its own memory controller and memory slots.
NUMA also distinguishes several kinds of nodes:
1) Local node: for all CPUs within a given node, that node is the local node.
2) Neighbour node: a node adjacent to the local node.
3) Remote node: any node that is neither the local node nor a neighbour node.
4) Neighbour nodes and remote nodes are collectively called off-nodes.
A CPU accesses these kinds of node memory at different speeds: the local node is fastest and remote nodes are slowest, i.e. access speed depends on the distance to the node -- the farther away, the slower. This distance is called the node distance. Applications should minimize interaction between different CPU modules; if an application can be confined to a single CPU module, its performance improves considerably.
https://www.xiexianbin.cn/linux/commands/numactl/index.html
How to use numactl

2. Syscalls used by numactl

numactl sets the scheduling or memory-placement policy of a process; all children inherit the policy it sets. It can also set the memory policy of a shared memory segment or file. The tool can show the NUMA node configuration and state of the current server, and it can bind a process to specified CPU cores so that only those cores run the process.

https://linux.die.net/man/

(1) set_mempolicy

Sets the default NUMA memory policy of the calling process and its children.

//set default NUMA memory policy for a process and its children
#include <numaif.h>
int set_mempolicy(int mode, unsigned long *nodemask, unsigned long maxnode);
Link with -lnuma.

(2) get_mempolicy

Retrieves the NUMA policy of the calling thread or of a memory address, depending on the flags.

//retrieves the NUMA policy of the calling thread or of a memory address, depending on the setting of flags.
#include <numaif.h>
int get_mempolicy(int *mode, unsigned long *nodemask, unsigned long maxnode, unsigned long addr, unsigned long flags);
Link with -lnuma.

(3) mbind

Sets the memory policy for a given memory range.

//set memory policy for a memory range
#include <numaif.h>
int mbind(void *addr, unsigned long len, int mode, unsigned long *nodemask, unsigned long maxnode, unsigned flags);
Link with -lnuma.

(4) migrate_pages

Moves all pages of a process to another set of nodes.

//move all pages in a process to another set of nodes
#include <numaif.h>
long migrate_pages(int pid, unsigned long maxnode, const unsigned long *old_nodes, const unsigned long *new_nodes);
Link with -lnuma.

(5) move_pages

Moves individual pages of a process to another node.

//move individual pages of a process to another node
#include <numaif.h>
long move_pages(int pid, unsigned long count, void **pages, const int *nodes, int *status, int flags);
Link with -lnuma.

(6) sched_setaffinity/sched_getaffinity

Sets/gets a process's CPU affinity mask. If pid is 0, the mask of the calling process is used.

//set and get a process's CPU affinity mask;If pid is zero, then the mask of the calling process is returned
int sched_setaffinity(pid_t pid, size_t cpusetsize, cpu_set_t *mask);
int sched_getaffinity(pid_t pid, size_t cpusetsize, cpu_set_t *mask);
numactl [--interleave nodes] [--preferred node] [--membind nodes] [--cpunodebind nodes] [--physcpubind cpus] [--localalloc] [--] {arguments ...}
numactl --show
numactl --hardware
numactl [--huge] [--offset offset] [--shmmode shmmode] [--length length] [--strict]
[--shmid id] --shm shmkeyfile | --file tmpfsfile
[--touch] [--dump] [--dump-nodes] memory policy

3. The numactl command line

usage: numactl [--all | -a] [--interleave= | -i <nodes>] [--preferred= | -p <node>]
               [--physcpubind= | -C <cpus>] [--cpunodebind= | -N <nodes>]
               [--membind= | -m <nodes>] [--localalloc | -l] command args ...
       numactl [--show | -s]
       numactl [--hardware | -H]
       numactl [--length | -l <length>] [--offset | -o <offset>] [--shmmode | -M <shmmode>]
               [--strict | -t]
               [--shmid | -I <id>] --shm | -S <shmkeyfile>
               [--shmid | -I <id>] --file | -f <tmpfsfile>
               [--huge | -u] [--touch | -T] 
               memory policy | --dump | -d | --dump-nodes | -D

memory policy is --interleave | -i, --preferred | -p, --membind | -m, --localalloc | -l
<nodes> is a comma delimited list of node numbers or A-B ranges or all.
Instead of a number a node can also be:
  netdev:DEV the node connected to network device DEV
  file:PATH  the node the block device of path is connected to
  ip:HOST    the node of the network device host routes through
  block:PATH the node of block device path
  pci:[seg:]bus:dev[:func] The node of a PCI device
<cpus> is a comma delimited list of cpu numbers or A-B ranges or all
all ranges can be inverted with !
all numbers and ranges can be made cpuset-relative with +
the old --cpubind argument is deprecated.
use --cpunodebind or --physcpubind instead
<length> can have g (GB), m (MB) or k (KB) suffixes
      
1 Interleaved allocation
Use the --interleave option, e.g. for a memory-hungry mongodb process, to spread its memory across all nodes:

numactl --interleave=all mongod -f /etc/mongod.conf
See also the rs startup script in <Mongo Sharding cluster configuration>

2 Memory binding
numactl --cpunodebind=0 --membind=0 python param
numactl --physcpubind=0 --membind=0 python param
3 CPU binding
numactl -C 0-1 ./test
Runs the application test bound to cores 0 and 1.
      
1. default: always allocate on the local node (the node the process is currently running on);
2. bind: force allocation on the specified nodes;
3. interleave: interleave allocations across all nodes, or across the specified nodes;
4. preferred: allocate on the specified node first, falling back to other nodes on failure.
[csluo@localhost node3]$ numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
node 0 size: 64847 MB
node 0 free: 6721 MB
node 1 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
node 1 size: 65465 MB
node 1 free: 5235 MB
node 2 cpus: 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
node 2 size: 65465 MB
node 2 free: 5753 MB
node 3 cpus: 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127
node 3 size: 64440 MB
node 3 free: 19081 MB
node distances:
node   0   1   2   3 
  0:  10  16  32  33 
  1:  16  10  25  32 
  2:  32  25  10  16 
  3:  33  32  16  10 

    ->| nopolicy
    ->| hardware
      ->| numa_node_size64 // /sys/devices/system/node/node%d/meminfo: read the node's total/free memory
	/*----------------------------------------------------------------------*/
      ->| print_node_cpus
      	->| numa_node_to_cpus // /sys/devices/system/node/node%d/cpumap: read the node's cpumask
	/*----------------------------------------------------------------------*/
      ->| print_distances
      	->| numa_distance
      		->| read_distance_table // /sys/devices/system/node/node%d/distance: read node-to-node distances
[csluo@localhost node3]$ numactl -s
policy: default
preferred node: current
physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 
cpubind: 0 1 2 3 
nodebind: 0 1 2 3 
membind: 0 1 2 3 

    ->| show
    	->|numa_get_run_node_mask  //sched_getaffinity + /sys/devices/system/node/node%d/cpumap = affinity_node
    	->|numa_preferred //get_mempolicy(policy, &bmp) -> policy
		->|numa_get_interleave_mask
    	->|numa_get_membind //get_mempolicy(policy, &bmp) -> bmp
		->|numa_get_interleave_node


void __attribute__((constructor))
numa_init
	->|set_sizes 
		/* </proc/self/status Mems_allowed:\tnum> gives the size of nodemask_t,
		 * i.e. the maximum node count the kernel supports, 2^CONFIG_NODES_SHIFT
		 */
        ->|set_nodemask_size();	/* size of kernel nodemask_t */

		/* walk </sys/devices/system/node>:
		 * [highest node number in this environment: maxconfigurednode],
		 * [mask of all nodes present: numa_nodes_ptr],
		 * [mask of nodes with usable memory: numa_memnode_ptr]
		 */
        ->|set_configured_nodes();	/* configured nodes listed in /sys */

		/* sched_getaffinity [determines the size of cpumask_t: cpumask_sz],
		 * i.e. the maximum CPU count the kernel supports, CONFIG_NR_CPUS
		 */
        ->|set_numa_max_cpu();	/* size of kernel cpumask_t */

		/* sysconf(_SC_NPROCESSORS_CONF) [highest configured CPU: maxconfiguredcpu] */
        ->|set_configured_cpus();	/* cpus listed in /sys/devices/system/cpu */

        /* </proc/self/status>
         * [Cpus_allowed: numa_all_cpus_ptr], CPU mask available to the process
         * [Mems_allowed: numa_all_nodes_ptr], node mask available to the process
         * [maxconfigurednode -> numa_possible_nodes_ptr], possible node mask derived from the highest node number
         * [maxconfiguredcpu -> numa_possible_cpus_ptr], possible CPU mask derived from the highest CPU number
         */
        ->|set_task_constraints(); /* cpus and nodes for current task */
// SysV shared memory segment
https://flylib.com/books/en/1.393.1.140/1/

Shared memory provides an efficient way to share large amounts of data between processes. It is one of the most important resources the IPC facility provides, and it is heavily used in many database applications. A SysV shared memory segment is created by the shmget() system call. After the segment is created, a process can attach itself to it by issuing a shmat() system call and then perform read or write operations on it. The process detaches itself from the segment with a shmdt() system call. Because shared memory is a resource common to multiple processes, it is often used together with semaphores to prevent collisions.
[root@localhost numactl]$ numactl -m 0 -C 0 top&
[1] 596490
 // inspect the process's memory allocations
[csluo@localhost numactl]$ cat /proc/`pidof top`/numa_maps | grep "N[0-9]"
aaada7450000 bind:0 file=/usr/bin/top mapped=2 mapmax=2 N1=2 kernelpagesize_kB=64
aaada7470000 bind:0 file=/usr/bin/top anon=1 dirty=1 N0=1 kernelpagesize_kB=64
aaada7480000 bind:0 file=/usr/bin/top anon=1 dirty=1 N0=1 kernelpagesize_kB=64
aaada7490000 bind:0 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
aaadc6d30000 bind:0 heap anon=3 dirty=3 N0=3 kernelpagesize_kB=64
fffc6dcb0000 bind:0 file=/usr/lib64/libnuma.so.1.0.0 mapped=1 mapmax=11 N1=1 kernelpagesize_kB=64
fffc6dcc0000 bind:0 file=/usr/lib64/libnuma.so.1.0.0 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6dcd0000 bind:0 file=/usr/lib64/libnuma.so.1.0.0 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6dd00000 bind:0 file=/usr/lib/locale/en_US.utf8/LC_CTYPE mapped=2 mapmax=77 N0=2 kernelpagesize_kB=64
fffc6dd60000 bind:0 file=/usr/lib/locale/en_US.utf8/LC_NUMERIC mapped=1 mapmax=76 N0=1 kernelpagesize_kB=64
fffc6dd70000 bind:0 file=/usr/lib/locale/en_US.utf8/LC_TIME mapped=1 mapmax=76 N0=1 kernelpagesize_kB=64
fffc6dd80000 bind:0 file=/usr/lib/locale/en_US.utf8/LC_COLLATE mapped=3 mapmax=76 N0=3 kernelpagesize_kB=64
fffc6e000000 bind:0 file=/usr/lib/locale/en_US.utf8/LC_MONETARY mapped=1 mapmax=76 N0=1 kernelpagesize_kB=64
fffc6e010000 bind:0 file=/usr/lib/locale/en_US.utf8/LC_MESSAGES/SYS_LC_MESSAGES mapped=1 mapmax=76 N0=1 kernelpagesize_kB=64
fffc6e020000 bind:0 file=/usr/lib/locale/en_US.utf8/LC_PAPER mapped=1 mapmax=76 N0=1 kernelpagesize_kB=64
fffc6e030000 bind:0 file=/usr/lib/locale/en_US.utf8/LC_NAME mapped=1 mapmax=76 N0=1 kernelpagesize_kB=64
fffc6e040000 bind:0 file=/usr/lib/locale/en_US.utf8/LC_ADDRESS mapped=1 mapmax=76 N0=1 kernelpagesize_kB=64
fffc6e050000 bind:0 file=/usr/lib/locale/en_US.utf8/LC_TELEPHONE mapped=1 mapmax=76 N0=1 kernelpagesize_kB=64
fffc6e060000 bind:0 file=/usr/lib64/libgpg-error.so.0.26.1 mapped=2 mapmax=67 N0=2 kernelpagesize_kB=64
fffc6e080000 bind:0 file=/usr/lib64/libgpg-error.so.0.26.1 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e090000 bind:0 file=/usr/lib64/libgpg-error.so.0.26.1 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e0a0000 bind:0 file=/usr/lib64/libgcc_s-7.3.0-20190804.so.1 mapped=1 mapmax=86 N0=1 kernelpagesize_kB=64
fffc6e0c0000 bind:0 file=/usr/lib64/libgcc_s-7.3.0-20190804.so.1 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e0d0000 bind:0 file=/usr/lib64/libgcc_s-7.3.0-20190804.so.1 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e0e0000 bind:0 file=/usr/lib64/libgcrypt.so.20.2.3 mapped=3 mapmax=67 N0=3 kernelpagesize_kB=64
fffc6e1a0000 bind:0 file=/usr/lib64/libgcrypt.so.20.2.3 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e1b0000 bind:0 file=/usr/lib64/libgcrypt.so.20.2.3 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e1c0000 bind:0 file=/usr/lib64/liblz4.so.1.9.2 mapped=1 mapmax=66 N0=1 kernelpagesize_kB=64
fffc6e200000 bind:0 file=/usr/lib64/liblz4.so.1.9.2 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e210000 bind:0 file=/usr/lib64/liblz4.so.1.9.2 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e220000 bind:0 file=/usr/lib64/liblzma.so.5.2.4 mapped=1 mapmax=67 N0=1 kernelpagesize_kB=64
fffc6e250000 bind:0 file=/usr/lib64/liblzma.so.5.2.4 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e260000 bind:0 file=/usr/lib64/liblzma.so.5.2.4 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e270000 bind:0 file=/usr/lib64/librt-2.28.so mapped=1 mapmax=71 N0=1 kernelpagesize_kB=64
fffc6e280000 bind:0 file=/usr/lib64/librt-2.28.so anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e290000 bind:0 file=/usr/lib64/librt-2.28.so anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e2a0000 bind:0 file=/usr/lib64/libpthread-2.28.so mapped=2 mapmax=128 N0=2 kernelpagesize_kB=64
fffc6e2c0000 bind:0 file=/usr/lib64/libpthread-2.28.so anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e2d0000 bind:0 file=/usr/lib64/libpthread-2.28.so anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e2e0000 bind:0 file=/usr/lib64/libc-2.28.so mapped=17 mapmax=198 N0=17 kernelpagesize_kB=64
fffc6e450000 bind:0 file=/usr/lib64/libc-2.28.so anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e460000 bind:0 file=/usr/lib64/libc-2.28.so anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e470000 bind:0 file=/usr/lib64/libdl-2.28.so mapped=1 mapmax=131 N0=1 kernelpagesize_kB=64
fffc6e480000 bind:0 file=/usr/lib64/libdl-2.28.so anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e490000 bind:0 file=/usr/lib64/libdl-2.28.so anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e4a0000 bind:0 file=/usr/lib64/libtinfo.so.6.1 mapped=3 mapmax=51 N0=3 kernelpagesize_kB=64
fffc6e4d0000 bind:0 file=/usr/lib64/libtinfo.so.6.1 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e4e0000 bind:0 file=/usr/lib64/libtinfo.so.6.1 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e4f0000 bind:0 file=/usr/lib64/libncurses.so.6.1 mapped=1 mapmax=3 N1=1 kernelpagesize_kB=64
fffc6e520000 bind:0 file=/usr/lib64/libncurses.so.6.1 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e530000 bind:0 file=/usr/lib64/libncurses.so.6.1 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e540000 bind:0 file=/usr/lib64/libsystemd.so.0.27.0 mapped=2 mapmax=57 N0=2 kernelpagesize_kB=64
fffc6e5f0000 bind:0 file=/usr/lib64/libsystemd.so.0.27.0 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e600000 bind:0 file=/usr/lib64/libsystemd.so.0.27.0 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e610000 bind:0 file=/usr/lib64/libprocps.so.8.0.2 mapped=1 mapmax=2 N1=1 kernelpagesize_kB=64
fffc6e630000 bind:0 file=/usr/lib64/libprocps.so.8.0.2 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e640000 bind:0 file=/usr/lib64/libprocps.so.8.0.2 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e650000 bind:0 anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e660000 bind:0 file=/usr/lib/locale/en_US.utf8/LC_MEASUREMENT mapped=1 mapmax=76 N0=1 kernelpagesize_kB=64
fffc6e670000 bind:0 file=/usr/lib64/gconv/gconv-modules.cache mapped=1 mapmax=78 N0=1 kernelpagesize_kB=64
fffc6e6a0000 bind:0 file=/usr/lib64/ld-2.28.so mapped=2 mapmax=197 N0=2 kernelpagesize_kB=64
fffc6e6c0000 bind:0 file=/usr/lib/locale/en_US.utf8/LC_IDENTIFICATION mapped=1 mapmax=76 N0=1 kernelpagesize_kB=64
fffc6e6d0000 bind:0 file=/usr/lib64/ld-2.28.so anon=1 dirty=1 N0=1 kernelpagesize_kB=64
fffc6e6e0000 bind:0 file=/usr/lib64/ld-2.28.so anon=1 dirty=1 N0=1 kernelpagesize_kB=64
ffffd2cb0000 bind:0 stack anon=2 dirty=2 N0=2 kernelpagesize_kB=64
//check which CPU the program is running on
[csluo@localhost numactl]$ cat /proc/`pidof top`/status |grep Cpus_allowed_list
Cpus_allowed_list:	0
[csluo@localhost numactl]$ ps -o pid,psr,comm -p `pidof top`
    PID PSR COMMAND
 599842   0 top
[csluo@localhost numactl]$ ps -o pid,psr,comm -p `pidof top`
    PID PSR COMMAND
 599842   0 top
[csluo@localhost numactl]$ ps -o pid,psr,comm -p `pidof top`
    PID PSR COMMAND
 599842   0 top

#0  0x0000ffffafe5db04 in strstr () from /usr/lib64/libc.so.6
(gdb) bt
#0  0x0000ffffafe5db04 in strstr () from /usr/lib64/libc.so.6
#1  0x0000000000401ed8 in add_pids_from_pattern_search (pattern=pattern@entry=0x0) at numastat.c:1319
#2  0x00000000004012e8 in main (argc=1, argv=0xffffdd7413c8) at numastat.c:1399
This post is licensed under CC BY 4.0 by the author.