lockstat(8)          System Administration Commands          lockstat(8)
NAME
lockstat - report kernel lock and profiling statistics
SYNOPSIS
lockstat [-ACEHI] [-e event_list] [-i rate]
[-b | -t | -h | -s depth] [-n nrecords]
[-l lock[,size]] [-d duration]
[-f func[,size]] [-T] [-ckgwWRpP] [-D count]
[-o filename] [-x opt[=val]] command [args]
DESCRIPTION
The lockstat utility gathers and displays kernel locking and profiling
statistics. lockstat allows you to specify which events to watch (for
example, spin on adaptive mutex, block on read access to rwlock due to
waiting writers, and so forth), how much data to gather for each event,
and how to display the data. By default, lockstat monitors all lock
contention events, gathers frequency and timing data about those
events, and displays the data in decreasing frequency order, so that
the most common events appear first.
lockstat gathers data until the specified command completes. For
example, to gather statistics for a fixed-time interval, use sleep(1) as
the command, as follows:
example# lockstat sleep 5
When the -I option is specified, lockstat establishes a per-processor
high-level periodic interrupt source to gather profiling data. The
interrupt handler simply generates a lockstat event whose caller is the
interrupted PC (program counter). The profiling event is just like any
other lockstat event, so all of the normal lockstat options are
applicable.
lockstat relies on DTrace to modify the running kernel's text to
intercept events of interest. This imposes a small but measurable overhead
on all system activity, so access to lockstat is restricted to super-
user by default. The system administrator can permit other users to use
lockstat by granting them additional DTrace privileges. Refer to the
Oracle Solaris 11.4 DTrace (Dynamic Tracing) Guide for more information
about DTrace security features.
OPTIONS
The following options are supported:
Event Selection
If no event selection options are specified, the default is -C.
-A
Watch all lock events. -A is equivalent to -CH.
-C
Watch contention events.
-E
Watch error events.
-e event_list
Only watch the specified events. event_list is a comma-separated
list of events or ranges of events such as 1,4-7,35. Run lockstat
with no arguments to get a brief description of all events.
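The range syntax shown above (1,4-7,35) can be expanded mechanically. A minimal sketch in Python; the helper name is illustrative and is not part of lockstat itself:

```python
def expand_event_list(spec):
    """Expand a lockstat-style event list such as "1,4-7,35"
    into a sorted list of individual event numbers."""
    events = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            events.update(range(int(lo), int(hi) + 1))
        else:
            events.add(int(part))
    return sorted(events)

print(expand_event_list("1,4-7,35"))  # → [1, 4, 5, 6, 7, 35]
```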
-H
Watch hold events.
-I
Watch profiling interrupt events.
-i rate
Interrupt rate (per second) for -I. The default is 97 Hz, so that
profiling doesn't run in lockstep with the clock interrupt (which
runs at 100 Hz).
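The choice of 97 Hz works because 97 shares no common factor with the 100 Hz clock rate, so successive profiling interrupts drift through every phase of the clock tick rather than repeatedly sampling the same instant. A back-of-the-envelope check (idealized, jitter-free timers assumed; this is not lockstat code):

```python
from math import gcd

CLOCK_HZ = 100  # clock interrupt rate
PROF_HZ = 97    # default -i sampling rate

# Coprime rates: the profiling interrupt realigns with the clock
# tick only once per second, so samples spread across all phases.
assert gcd(PROF_HZ, CLOCK_HZ) == 1

# Phase of each of the 97 samples in one second, measured within
# the 10 ms clock period (in units of 1/97 of that period):
phases = {(i * CLOCK_HZ) % PROF_HZ for i in range(PROF_HZ)}
print(len(phases))  # 97 distinct phases; a 100 Hz rate would give 1
```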
Data Gathering
-x arg[=val]
Enable or modify a DTrace runtime option or D compiler option. The
list of options is found in dtrace(8). Boolean options are enabled
by specifying their name. Options with values are set by separating
the option name and value with an equals sign (=).
Data Gathering (Mutually Exclusive)
-b
Basic statistics: lock, caller, number of events.
-h
Histogram: Timing plus time-distribution histograms.
-s depth
Stack trace: Histogram plus stack traces up to depth frames deep.
-t
Timing: Basic plus timing for all events [default].
Data Filtering
-d duration
Only watch events longer than duration.
-f func[,size]
Only watch events generated by func, which can be specified as a
symbolic name or hex address. size defaults to the ELF symbol size
if available, or 1 if not.
-l lock[,size]
Only watch lock, which can be specified as a symbolic name or hex
address. size defaults to the ELF symbol size or 1 if the symbol
size is not available.
-n nrecords
Maximum number of data records.
-T
Trace (rather than sample) events [off by default].
Data Reporting
-c
Coalesce lock data for lock arrays (for example, pse_mutex[]).
-D count
Only display the top count events of each type.
-g
Show total events generated by function. For example, if foo()
calls bar() in a loop, the work done by bar() counts as work
generated by foo() (along with any work done by foo() itself). The -g
option works by counting the total number of stack frames in which
each function appears. This implies two things: (1) the data
reported by -g can be misleading if the stack traces are not deep
enough, and (2) functions that are called recursively might show
greater than 100% activity. In light of issue (1), the default data
gathering mode when using -g is -s 50.
-k
Coalesce PCs within functions.
-o filename
Direct output to filename.
-P
Sort data by (count * time) product.
-p
Parsable output format.
-R
Display rates (events per second) rather than counts.
-W
Whichever: distinguish events only by caller, not by lock.
-w
Wherever: distinguish events only by lock, not by caller.
DISPLAY FORMATS
The following headers appear over various columns of data.
Count or ops/s
Number of times this event occurred, or the rate (times per second)
if -R was specified.
indv
Percentage of all events represented by this individual event.
genr
Percentage of all events generated by this function.
cuml
Cumulative percentage; a running total of the individuals.
rcnt
Average reference count. This will always be 1 for exclusive locks
(mutexes, spin locks, rwlocks held as writer) but can be greater
than 1 for shared locks (rwlocks held as reader).
nsec
Average duration of the events in nanoseconds, as appropriate for
the event. For the profiling event, duration means interrupt
latency.
Lock
Address of the lock; displayed symbolically if possible.
CPU+PIL
CPU plus processor interrupt level (PIL). For example, if CPU 4 is
interrupted while at PIL 6, this will be reported as cpu[4]+6.
Caller
Address of the caller; displayed symbolically if possible.
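The column layout described above is regular enough to split on whitespace. A hedged sketch of a row parser, matching the example output in this page; the exact spacing is a display convention, not a stable interface, so for machine consumption prefer the -p parsable format:

```python
def parse_lockstat_row(line):
    """Parse one data row of default lockstat output, e.g.
    '269 12% 12% 1.00 2160 service_queue background+0xdc'."""
    count, indv, cuml, rcnt, nsec, lock, caller = line.split(None, 6)
    return {
        "count": int(count),
        "indv": int(indv.rstrip("%")),    # individual percentage
        "cuml": int(cuml.rstrip("%")),    # cumulative percentage
        "rcnt": float(rcnt),              # average reference count
        "nsec": int(nsec),                # average duration in ns
        "lock": lock,
        "caller": caller,
    }

row = parse_lockstat_row("269 12% 12% 1.00 2160 service_queue background+0xdc")
print(row["count"], row["caller"])  # 269 background+0xdc
```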
EXAMPLES
Example 1 Measuring Kernel Lock Contention
example# lockstat sleep 5
Adaptive mutex spin: 2210 events in 5.055 seconds (437 events/sec)
Count indv cuml rcnt nsec Lock Caller
------------------------------------------------------------------------
269 12% 12% 1.00 2160 service_queue background+0xdc
249 11% 23% 1.00 86 service_queue qenable_locked+0x64
228 10% 34% 1.00 131 service_queue background+0x15c
68 3% 37% 1.00 79 0x30000024070 untimeout+0x1c
59 3% 40% 1.00 384 0x300066fa8e0 background+0xb0
43 2% 41% 1.00 30 rqcred_lock svc_getreq+0x3c
42 2% 43% 1.00 341 0x30006834eb8 background+0xb0
41 2% 45% 1.00 135 0x30000021058 untimeout+0x1c
40 2% 47% 1.00 39 rqcred_lock svc_getreq+0x260
37 2% 49% 1.00 2372 0x300068e83d0 hmestart+0x1c4
36 2% 50% 1.00 77 0x30000021058 timeout_common+0x4
36 2% 52% 1.00 354 0x300066fa120 background+0xb0
32 1% 53% 1.00 97 0x30000024070 timeout_common+0x4
31 1% 55% 1.00 2923 0x300069883d0 hmestart+0x1c4
29 1% 56% 1.00 366 0x300066fb290 background+0xb0
28 1% 57% 1.00 117 0x3000001e040 untimeout+0x1c
25 1% 59% 1.00 93 0x3000001e040 timeout_common+0x4
22 1% 60% 1.00 25 0x30005161110 sync_stream_buf+0xdc
21 1% 60% 1.00 291 0x30006834eb8 putq+0xa4
19 1% 61% 1.00 43 0x3000515dcb0 mdf_alloc+0xc
18 1% 62% 1.00 456 0x30006834eb8 qenable+0x8
18 1% 63% 1.00 61 service_queue queuerun+0x168
17 1% 64% 1.00 268 0x30005418ee8 vmem_free+0x3c
[...]
R/W reader blocked by writer: 76 events in 5.055 seconds (15 events/sec)
Count indv cuml rcnt nsec Lock Caller
------------------------------------------------------------------------
23 30% 30% 1.00 22590137 0x300098ba358 ufs_dirlook+0xd0
17 22% 53% 1.00 5820995 0x3000ad815e8 find_bp+0x10
13 17% 70% 1.00 2639918 0x300098ba360 ufs_iget+0x198
4 5% 75% 1.00 3193015 0x300098ba360 ufs_getattr+0x54
3 4% 79% 1.00 7953418 0x3000ad817c0 find_bp+0x10
3 4% 83% 1.00 935211 0x3000ad815e8 find_read_lof+0x14
2 3% 86% 1.00 16357310 0x300073a4720 find_bp+0x10
2 3% 88% 1.00 2072433 0x300073a4720 find_read_lof+0x14
2 3% 91% 1.00 1606153 0x300073a4370 find_bp+0x10
1 1% 92% 1.00 2656909 0x300107e7400 ufs_iget+0x198
[...]
Example 2 Measuring Hold Times
example# lockstat -H -D 10 sleep 1
Adaptive mutex spin: 513 events
Count indv cuml rcnt nsec Lock Caller
-------------------------------------------------------------------------
480 5% 5% 1.00 1136 0x300007718e8 putnext+0x40
286 3% 9% 1.00 666 0x3000077b430 getf+0xd8
271 3% 12% 1.00 537 0x3000077b430 msgio32+0x2fc
270 3% 15% 1.00 3670 0x300007718e8 strgetmsg+0x3d4
270 3% 18% 1.00 1016 0x300007c38b0 getq_noenab+0x200
264 3% 20% 1.00 1649 0x300007718e8 strgetmsg+0xa70
216 2% 23% 1.00 6251 tcp_mi_lock tcp_snmp_get+0xfc
206 2% 25% 1.00 602 thread_free_lock clock+0x250
138 2% 27% 1.00 485 0x300007c3998 putnext+0xb8
138 2% 28% 1.00 3706 0x300007718e8 strrput+0x5b8
-------------------------------------------------------------------------
[...]
Example 3 Measuring Hold Times for Stack Traces Containing a Specific
Function
example# lockstat -H -f so_recvmsg -s 50 -D 10 sleep 10
Adaptive mutex hold: 171792 events in 10.089 seconds (17028 events/sec)
-------------------------------------------------------------------------------
Count indv cuml rcnt nsec Lock Caller
48140 28% 28% 0.00 1109 0x184004c76ade8 so_recvmsg+0x8a0
nsec ------ Time Distribution ------ count Stack
1024 | 991 socket_recvmsg+0x28
2048 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 47120 socket_vop_read+0x20
4096 | 28 fop_read+0xe8
8192 | 0 read+0x2d0
16384 | 0 syscall_trap+0x238
32768 | 0
65536 | 0
131072 | 1
[...]
-------------------------------------------------------------------------------
Count indv cuml rcnt nsec Lock Caller
37756 22% 100% 0.00 1076 0x18400555603e0 so_recvmsg+0x2bc
nsec ------ Time Distribution ------ count Stack
1024 |@@@@@ 7519 socket_recvmsg+0x28
2048 |@@@@@@@@@@@@@@@@@@@@@@@ 30193 socket_vop_read+0x20
4096 | 41 fop_read+0xe8
8192 | 3 read+0x2d0
syscall_trap+0x238
[...]
Example 4 Basic Kernel Profiling
For basic profiling, we don't care whether the profiling interrupt
sampled foo()+0x4c or foo()+0x78; we care only that it sampled somewhere
in foo(), so we use -k. The CPU and PIL aren't relevant to basic
profiling because we are measuring the system as a whole, not a
particular CPU or interrupt level, so we use -W.
example# lockstat -kIW -D 20 ./polltest
Profiling interrupt: 82 events in 0.424 seconds (194 events/sec)
Count indv cuml rcnt nsec Hottest CPU+PIL Caller
-----------------------------------------------------------------------
8 10% 10% 1.00 698 cpu[1] utl0
6 7% 17% 1.00 299 cpu[0] read
5 6% 23% 1.00 124 cpu[1] getf
4 5% 28% 1.00 327 cpu[0] fifo_read
4 5% 33% 1.00 112 cpu[1] poll
4 5% 38% 1.00 212 cpu[1] uiomove
4 5% 43% 1.00 361 cpu[1] mutex_tryenter
3 4% 46% 1.00 682 cpu[0] write
3 4% 50% 1.00 89 cpu[0] pcache_poll
3 4% 54% 1.00 118 cpu[1] set_active_fd
3 4% 57% 1.00 105 cpu[0] syscall_trap32
3 4% 61% 1.00 640 cpu[1] (usermode)
2 2% 63% 1.00 127 cpu[1] fifo_poll
2 2% 66% 1.00 300 cpu[1] fifo_write
2 2% 68% 1.00 669 cpu[0] releasef
2 2% 71% 1.00 112 cpu[1] bt_getlowbit
2 2% 73% 1.00 247 cpu[1] splx
2 2% 76% 1.00 503 cpu[0] mutex_enter
2 2% 78% 1.00 467 cpu[0]+10 disp_lock_enter
2 2% 80% 1.00 139 cpu[1] default_copyin
-----------------------------------------------------------------------
[...]
Example 5 Generated-load Profiling
In the example above, 5% of the samples were in poll(). This tells us
how much time was spent inside poll() itself, but tells us nothing
about how much work was generated by poll(); that is, how much time we
spent in functions called by poll(). To determine that, we use the -g
option. The example below shows that although polltest spends only 5%
of its time in poll() itself, poll()-induced work accounts for 34% of
the load.
Note that the functions that generate the profiling interrupt
(lockstat_intr(), cyclic_fire(), and so forth) appear in every stack
trace, and therefore are considered to have generated 100% of the load. This
illustrates an important point: the generated load percentages do not
add up to 100% because they are not independent. If 72% of all stack
traces contain both foo() and bar(), then both foo() and bar() are 72%
load generators.
example# lockstat -kgIW -D 20 ./polltest
Profiling interrupt: 80 events in 0.412 seconds (194 events/sec)
Count genr cuml rcnt nsec Hottest CPU+PIL Caller
-------------------------------------------------------------------------
80 100% ---- 1.00 310 cpu[1] lockstat_intr
80 100% ---- 1.00 310 cpu[1] cyclic_fire
80 100% ---- 1.00 310 cpu[1] cbe_level14
80 100% ---- 1.00 310 cpu[1] current_thread
27 34% ---- 1.00 176 cpu[1] poll
20 25% ---- 1.00 221 cpu[0] write
19 24% ---- 1.00 249 cpu[1] read
17 21% ---- 1.00 232 cpu[0] write32
17 21% ---- 1.00 207 cpu[1] pcache_poll
14 18% ---- 1.00 319 cpu[0] fifo_write
13 16% ---- 1.00 214 cpu[1] read32
10 12% ---- 1.00 208 cpu[1] fifo_read
10 12% ---- 1.00 787 cpu[1] utl0
9 11% ---- 1.00 178 cpu[0] pcacheset_resolve
9 11% ---- 1.00 262 cpu[0] uiomove
7 9% ---- 1.00 506 cpu[1] (usermode)
5 6% ---- 1.00 195 cpu[1] fifo_poll
5 6% ---- 1.00 136 cpu[1] syscall_trap32
4 5% ---- 1.00 139 cpu[0] releasef
3 4% ---- 1.00 277 cpu[1] polllock
-------------------------------------------------------------------------
[...]
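The genr arithmetic follows directly from the definition given for -g: a function "generates" a sample if it appears anywhere in that sample's stack trace. A toy illustration with made-up traces (not real lockstat data):

```python
from collections import Counter

# Each sample is the set of distinct functions on one interrupted stack.
samples = [
    {"cyclic_fire", "poll", "pcache_poll"},
    {"cyclic_fire", "poll", "fifo_poll"},
    {"cyclic_fire", "write", "fifo_write"},
    {"cyclic_fire", "read", "fifo_read"},
]

genr = Counter()
for stack in samples:
    for fn in stack:        # a function counts once per trace it appears in
        genr[fn] += 1

total = len(samples)
for fn, n in genr.most_common():
    print(f"{fn:12s} {100 * n // total:3d}%")

# cyclic_fire is on every stack, so it reports 100% generated load;
# because traces overlap, the percentages need not sum to 100%.
```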
Example 6 Gathering Lock Contention and Profiling Data for a Specific
Module
In this example we use the -f option not to specify a single function,
but rather to specify the entire text space of the procfs module. We
gather both lock contention and profiling statistics so that contention
can be correlated with overall load on the module.
example# modinfo | grep procfs
9 10b0a698 1b068 5 1 procfs (filesystem for proc)
example# lockstat -kICE -f 0x10b0a698,0x1b068 sleep 10
Adaptive mutex spin: 16 events in 10.042 seconds (2 events/sec)
Count indv cuml rcnt nsec Lock Caller
-------------------------------------------------------------------------------
15 94% 94% 0.00 2158 pr_pidlock pr_p_lock_i
1 6% 100% 0.00 3060 pr_pidlock prinactive
-------------------------------------------------------------------------------
Profiling interrupt: 4 events in 10.033 seconds (0 events/sec)
Count indv cuml rcnt nsec CPU+PIL Caller
-------------------------------------------------------------------------------
1 25% 25% 0.00 588 cpu[108]+11 prchoose
1 25% 50% 0.00 492 cpu[108] prchoose
1 25% 75% 0.00 225 cpu[50]+11 prchoose
1 25% 100% 0.00 1785 cpu[50] prchoose
-------------------------------------------------------------------------------
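The two numbers passed to -f come straight from the modinfo output: the module's load address (10b0a698) and its size (1b068), read here as hexadecimal (my reading of the modinfo columns; check modinfo(8) on your system). Together they bound the text range lockstat watches; a quick check of the arithmetic:

```python
base = 0x10b0a698   # procfs load address from modinfo
size = 0x1b068      # procfs size from modinfo

# lockstat -f base,size watches callers in [base, base + size).
end = base + size
print(hex(base), "-", hex(end))  # 0x10b0a698 - 0x10b25700
```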
Example 7 Determining the Average PIL (processor interrupt level) for a
CPU
example# lockstat -Iw -l cpu[3] ./testprog
Profiling interrupt: 14791 events in 152.463 seconds (97 events/sec)
Count indv cuml rcnt nsec CPU+PIL Hottest Caller
-----------------------------------------------------------------------
13641 92% 92% 1.00 253 cpu[3] (usermode)
579 4% 96% 1.00 325 cpu[3]+6 ip_ocsum+0xe8
375 3% 99% 1.00 411 cpu[3]+10 splx
154 1% 100% 1.00 527 cpu[3]+4 fas_intr_svc+0x80
41 0% 100% 1.00 293 cpu[3]+13 send_mondo+0x18
1 0% 100% 1.00 266 cpu[3]+12 zsa_rxint+0x400
-----------------------------------------------------------------------
[...]
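From this output the average PIL falls out as a count-weighted mean. A sketch using the rows above, under the assumption (mine, not stated by the man page) that rows with no +PIL suffix, including the usermode samples, ran at PIL 0:

```python
# (count, PIL) pairs transcribed from the profiling output above.
rows = [
    (13641, 0),   # (usermode)
    (579, 6),     # ip_ocsum+0xe8
    (375, 10),    # splx
    (154, 4),     # fas_intr_svc+0x80
    (41, 13),     # send_mondo+0x18
    (1, 12),      # zsa_rxint+0x400
]

total = sum(count for count, _ in rows)
avg_pil = sum(count * pil for count, pil in rows) / total
print(f"average PIL = {avg_pil:.2f}")  # average PIL = 0.57
```

The average is dominated by usermode time at PIL 0, which is why cpu[3] spends 92% of its samples there.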
Example 8 Determining which Subsystem is Causing the System to be Busy
example# lockstat -s 10 -I sleep 20
Profiling interrupt: 4863 events in 47.375 seconds (103 events/sec)
Count indv cuml rcnt nsec CPU+PIL Caller
787 8% 8% 0.00 1953 cpu[112] default_mutex_lock_delay+0x6c
nsec ------ Time Distribution ------ count Stack
512 |@ 31 default_mutex_lock_delay+0x64
1024 |@@@@@@@@@@@@@@@@@ 460 lock_set_spl_spin+0xcc
2048 |@@@@@@@@@@ 277 turnstile_lookup+0x48
4096 | 19 rw_enter_sleep+0xcc
sdev_access+0x10
sdev_lookup+0x20
fop_lookup+0x15c
lookuppnvp+0x49c
lookuppnatcred+0x110
-----------------------------------------------------------------------
[...]
ATTRIBUTES
See attributes(7) for descriptions of the following attributes:
ATTRIBUTE TYPE     ATTRIBUTE VALUE
Availability       system/dtrace
SEE ALSO
lockstat(4D), attributes(7), dtrace(8), plockstat(8), mutex(9F),
rwlock(9F)
Oracle Solaris 11.4 DTrace (Dynamic Tracing) Guide
NOTES
The profiling support provided by lockstat -I replaces the old (and
undocumented) /usr/bin/kgmon and /dev/profile.
Tail-call elimination can affect call sites. For example, if foo()+0x50
calls bar() and the last thing bar() does is call mutex_exit(), the
compiler can arrange for bar() to branch to mutex_exit() with a return
address of foo()+0x58. Thus, the mutex_exit() in bar() will appear as
though it occurred at foo()+0x58.
The PC in the stack frame in which an interrupt occurs can be bogus
because, between function calls, the compiler is free to use the return
address register for local storage.
When using the -I and -s options together, the interrupted PC will
usually not appear anywhere in the stack since the interrupt handler is
entered asynchronously, not by a function call from that PC.
The lockstat technology is provided on an as-is basis. The format and
content of lockstat output reflect the current Solaris kernel
implementation and are therefore subject to change in future releases.
Oracle Solaris 11.4 6 Sept 2019 lockstat(8)