This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]

Re: What *is* the API for sched_getaffinity? Should sched_getaffinity always succeed when using cpu_set_t?


(7/17/13 3:03 PM), KOSAKI Motohiro wrote:
(7/17/13 6:05 AM), chrubis@suse.cz wrote:
Hi!
The use of threads or processes with shared memory allows an application to take advantage of all
the processing power a system can provide. If the task can be parallelized, the optimal way to write an application is to have at any time as many processes running as there are processors.
To determine the number of processors available to the system one can run

       sysconf (_SC_NPROCESSORS_CONF)

which returns the number of processors the operating system configured. But it might be possible for the operating system to disable individual processors, and so the call

        sysconf (_SC_NPROCESSORS_ONLN)

returns the number of processors which are currently online (i.e., available).


So, I doubt we should use /sys/devices/system/cpu/possible for _SC_NPROCESSORS_CONF. But "system one can run" seems a bit unclear and I'm not 100% sure we should do so. Does anyone know the purpose and intention of _SC_NPROCESSORS_CONF?

It looks like the whole subject is confusing, and the kernel was patched to
make this counting even work; see commit 4d658d13c90f14cf3510ca15cafe2f4aa9e23d64.

Sorry, I don't know the tile-specific details. I was describing the Linux generic behavior.


I'm not 100% sure what the change did, but it looks to me like it
changed the kernel to create cpu sysfs entries for all sockets on the bus
rather than only for the present ones.

Please look at /sys/devices/system/cpu/possible on your machine. It may be
different from /sys/devices/system/cpu/cpuX.

For example, my x86 machine shows:

$ cat /sys/devices/system/cpu/possible
0-31

$ find  /sys/devices/system/cpu -maxdepth  1 -name 'cpu[0-9]*'
/sys/devices/system/cpu/cpu0
/sys/devices/system/cpu/cpu1
/sys/devices/system/cpu/cpu2
/sys/devices/system/cpu/cpu3
/sys/devices/system/cpu/cpu4
/sys/devices/system/cpu/cpu5
/sys/devices/system/cpu/cpu6
/sys/devices/system/cpu/cpu7

And this matches /sys/devices/system/cpu/present.

Again, the definitions are:

possible cpus: the maximum number of cpus on the system. It never changes, even when
               hotplug happens, and it doesn't care whether cpus are equipped or not.
               On x86, the ACPI tables describe the supported cpus and sockets.
               The possible_cpus kernel boot option can tweak this.

online cpus: physically equipped and enabled by the kernel. If you use the maxcpus
             kernel boot option, this doesn't match the present cpus (nor, of course,
             after cpu hotplug).

present cpus: only counts equipped cpus, regardless of whether they are enabled or
              disabled.


In other words, /sys/.../online is unsafe against both physical cpu hot-remove and logical cpu
offline (i.e. using "echo 0 > /sys/.../cpu/cpuX/online"). /sys/.../present is unsafe only against
physical hot-remove.

btw, the qemu community is now developing a cpu hotplug feature, so you will be able to use it in
virtual environments in the future.



