PMC.WESTMERE(3) Library Functions Manual PMC.WESTMERE(3)

pmc.westmere - measurement events for Intel Westmere family CPUs

Performance Counters Library (libpmc, -lpmc)

#include <pmc.h>

Intel Westmere CPUs contain PMCs conforming to version 2 of the Intel performance measurement architecture. These CPUs may contain up to two classes of PMCs:
PMC_CLASS_IAF
Fixed-function counters that count only one hardware event per counter.
PMC_CLASS_IAP
Programmable counters that may be configured to count one of a defined set of hardware events.

The number of PMCs available in each class and their widths need to be determined at run time by calling pmc_cpuinfo(3).
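For example, the following minimal sketch (built on the pmc(3) API; the struct pmc_cpuinfo field names shown are those declared in FreeBSD's <pmc.h> and should be verified against your system's headers) reports the available classes, counter counts and counter widths at run time:

    #include <err.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <pmc.h>

    int
    main(void)
    {
        const struct pmc_cpuinfo *ci;
        uint32_t i;

        if (pmc_init() < 0)                 /* attach to the hwpmc(4) driver */
            err(1, "pmc_init");
        if (pmc_cpuinfo(&ci) < 0)           /* query counter topology */
            err(1, "pmc_cpuinfo");
        for (i = 0; i < ci->pm_nclass; i++) /* one entry per PMC class */
            printf("%s: %u counters, %u bits wide\n",
                pmc_name_of_class(ci->pm_classes[i].pm_class),
                ci->pm_classes[i].pm_num,
                ci->pm_classes[i].pm_width);
        return (0);
    }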

Intel Westmere PMCs are documented in Volume 3B: System Programming Guide, Part 2, Intel(R) 64 and IA-32 Architectures Software Developer's Manual, Order Number: 253669-033US, Intel Corporation, December 2009.

The fixed-function PMCs and their supported events are documented in pmc.iaf(3).

The programmable PMCs support the following capabilities:

PMC_CAP_CASCADE No
PMC_CAP_EDGE Yes
PMC_CAP_INTERRUPT Yes
PMC_CAP_INVERT Yes
PMC_CAP_READ Yes
PMC_CAP_PRECISE No
PMC_CAP_SYSTEM Yes
PMC_CAP_TAGGING No
PMC_CAP_THRESHOLD Yes
PMC_CAP_USER Yes
PMC_CAP_WRITE Yes

Event specifiers for these PMCs support the following common qualifiers:

rsp=value
Configure the Off-core Response bits.
DMND_DATA_RD
Counts the number of demand and DCU prefetch data reads of full and partial cachelines as well as demand data page table entry cacheline reads. Does not count L2 data read prefetches or instruction fetches.
DMND_RFO
Counts the number of demand and DCU prefetch reads for ownership (RFO) requests generated by a write to data cacheline. Does not count L2 RFO.
DMND_IFETCH
Counts the number of demand and DCU prefetch instruction cacheline reads. Does not count L2 code read prefetches.
WB
Counts the number of writeback (modified to exclusive) transactions.
PF_DATA_RD
Counts the number of data cacheline reads generated by L2 prefetchers.
PF_RFO
Counts the number of RFO requests generated by L2 prefetchers.
PF_IFETCH
Counts the number of code reads generated by L2 prefetchers.
OTHER
Counts one of the following transaction types, including L3 invalidate, I/O, full or partial writes, WC or non-temporal stores, CLFLUSH, Fences, lock, unlock, split lock.
UNCORE_HIT
L3 Hit: local or remote home requests that hit the L3 cache in the uncore with no coherency actions required (snooping).
OTHER_CORE_HIT_SNP
L3 Hit: local or remote home requests that hit the L3 cache in the uncore and were serviced by another core with a cross-core snoop where no modified copies were found (clean).
OTHER_CORE_HITM
L3 Hit: local or remote home requests that hit the L3 cache in the uncore and were serviced by another core with a cross-core snoop where modified copies were found (HITM).
REMOTE_CACHE_FWD
L3 Miss: local homed requests that missed the L3 cache and were serviced by forwarded data following a cross-package snoop where no modified copies were found. (Remote home requests are not counted.)
REMOTE_DRAM
L3 Miss: remote home requests that missed the L3 cache and were serviced by remote DRAM.
LOCAL_DRAM
L3 Miss: local home requests that missed the L3 cache and were serviced by local DRAM.
NON_DRAM
Non-DRAM requests that were serviced by the IOH.
cmask=value
Configure the PMC to increment only if the number of configured events measured in a cycle is greater than or equal to value.
edge
Configure the PMC to count the number of de-asserted to asserted transitions of the conditions expressed by the other qualifiers. If specified, the counter will increment only once whenever a condition becomes true, irrespective of the number of clocks during which the condition remains true.
inv
Invert the sense of comparison when the “cmask” qualifier is present, making the counter increment when the number of events per cycle is less than the value specified by the “cmask” qualifier.
os
Configure the PMC to count events happening at processor privilege level 0.
usr
Configure the PMC to count events occurring at privilege levels 1, 2 or 3.

If neither of the “os” or “usr” qualifiers are specified, the default is to enable both.
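To illustrate how these qualifiers combine, the hedged sketch below allocates a process-scope counting PMC from an event specifier string. "EVENT_NAME" is a placeholder for one of the Westmere event names accepted by libpmc, and the five-argument pmc_allocate() call shown matches older FreeBSD releases; newer releases add a trailing initial-count argument, so adjust to the pmc(3) shipped with your system:

    #include <err.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <pmc.h>

    int
    main(void)
    {
        pmc_id_t pmcid;
        pmc_value_t count;

        if (pmc_init() < 0)
            err(1, "pmc_init");

        /*
         * "EVENT_NAME" is a placeholder: substitute one of the Westmere
         * event names accepted by libpmc.  Qualifiers follow the event
         * name, separated by commas.  Older releases use this
         * five-argument pmc_allocate(); newer ones take a sixth
         * initial-count argument.
         */
        if (pmc_allocate("EVENT_NAME,cmask=2,inv,usr",
            PMC_MODE_TC, 0, PMC_CPU_ANY, &pmcid) < 0)
            err(1, "pmc_allocate");
        if (pmc_attach(pmcid, getpid()) < 0)    /* measure this process */
            err(1, "pmc_attach");
        if (pmc_start(pmcid) < 0)
            err(1, "pmc_start");

        /* ... code to be measured goes here ... */

        if (pmc_stop(pmcid) < 0)
            err(1, "pmc_stop");
        if (pmc_read(pmcid, &count) < 0)
            err(1, "pmc_read");
        printf("count: %ju\n", (uintmax_t)count);
        pmc_release(pmcid);
        return (0);
    }

The same specifier syntax, including the qualifiers above, is also accepted on the pmcstat(8) command line.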

Westmere programmable PMCs support the following events:

(Event 03H, Umask 02H) Loads that partially overlap an earlier store
(Event 04H, Umask 07H) All Store buffer stall cycles
(Event 05H, Umask 02H) All store referenced with misaligned address
(Event 06H, Umask 04H) Counts number of loads delayed with at-Retirement block code. The following loads need to be executed at retirement and wait for all senior stores on the same thread to be drained: load splitting across 4K boundary (page split), load accessing uncacheable (UC or USWC) memory, load lock, and load with page table in UC or USWC memory region.
(Event 06H, Umask 08H) Cacheable loads delayed with L1D block code
(Event 07H, Umask 01H) Counts false dependency due to partial address aliasing
(Event 08H, Umask 01H) Counts all load misses that cause a page walk
(Event 08H, Umask 02H) Counts number of completed page walks due to load miss in the STLB.
(Event 08H, Umask 04H) Cycles PMH is busy with a page walk due to a load miss in the STLB.
(Event 08H, Umask 10H) Number of cache load STLB hits
(Event 08H, Umask 20H) Number of DTLB cache load misses where the low part of the linear to physical address translation was missed.
(Event 0BH, Umask 01H) Counts the number of instructions with an architecturally-visible load retired on the architected path. In conjunction with ld_lat facility
(Event 0BH, Umask 02H) Counts the number of instructions with an architecturally-visible store retired on the architected path. In conjunction with ld_lat facility
(Event 0BH, Umask 10H) Counts the number of instructions exceeding the latency specified with ld_lat facility. In conjunction with ld_lat facility
(Event 0CH, Umask 01H) The event counts the number of retired stores that missed the DTLB. The DTLB miss is not counted if the store operation causes a fault. Does not count prefetches. Counts both primary and secondary misses to the TLB.
(Event 0EH, Umask 01H) Counts the number of Uops issued by the Register Allocation Table to the Reservation Station, i.e. the UOPs issued from the front end to the back end.
(Event 0EH, Umask 01H) Counts the number of cycles no Uops issued by the Register Allocation Table to the Reservation Station, i.e. the UOPs issued from the front end to the back end. set invert=1, cmask = 1
(Event 0EH, Umask 02H) Counts the number of fused Uops that were issued from the Register Allocation Table to the Reservation Station.
(Event 0FH, Umask 02H) Load instructions retired that HIT modified data in sibling core (Precise Event)
(Event 0FH, Umask 08H) Load instructions retired local dram and remote cache HIT data sources (Precise Event)
(Event 0FH, Umask 10H) Load instructions retired with a data source of local DRAM or locally homed remote cache HITM (Precise Event)
(Event 0FH, Umask 20H) Load instructions retired remote DRAM and remote home-remote cache HITM (Precise Event)
(Event 0FH, Umask 80H) Load instructions retired I/O (Precise Event)
(Event 10H, Umask 01H) Counts the number of FP Computational Uops Executed. The number of FADD, FSUB, FCOM, FMULs, integer MULs and IMULs, FDIVs, FPREMs, FSQRTS, integer DIVs, and IDIVs. This event does not distinguish an FADD used in the middle of a transcendental flow from a separate FADD instruction.
(Event 10H, Umask 02H) Counts number of MMX Uops executed.
(Event 10H, Umask 04H) Counts number of SSE and SSE2 FP uops executed.
(Event 10H, Umask 08H) Counts number of SSE2 integer uops executed.
(Event 10H, Umask 10H) Counts number of SSE FP packed uops executed.
(Event 10H, Umask 20H) Counts number of SSE FP scalar uops executed.
(Event 10H, Umask 40H) Counts number of SSE* FP single precision uops executed.
(Event 10H, Umask 80H) Counts number of SSE* FP double precision uops executed.
(Event 12H, Umask 01H) Counts number of 128 bit SIMD integer multiply operations.
(Event 12H, Umask 02H) Counts number of 128 bit SIMD integer shift operations.
(Event 12H, Umask 04H) Counts number of 128 bit SIMD integer pack operations.
(Event 12H, Umask 08H) Counts number of 128 bit SIMD integer unpack operations.
(Event 12H, Umask 10H) Counts number of 128 bit SIMD integer logical operations.
(Event 12H, Umask 20H) Counts number of 128 bit SIMD integer arithmetic operations.
(Event 12H, Umask 40H) Counts number of 128 bit SIMD integer shuffle and move operations.
(Event 13H, Umask 01H) Counts number of loads dispatched from the Reservation Station that bypass the Memory Order Buffer.
(Event 13H, Umask 02H) Counts the number of delayed RS dispatches at the stage latch. If an RS dispatch can not bypass to LB, it has another chance to dispatch from the one-cycle delayed staging latch before it is written into the LB.
(Event 13H, Umask 04H) Counts the number of loads dispatched from the Reservation Station to the Memory Order Buffer.
(Event 13H, Umask 07H) Counts all loads dispatched from the Reservation Station.
(Event 14H, Umask 01H) Counts the number of cycles the divider is busy executing divide or square root operations. The divide can be integer, X87 or Streaming SIMD Extensions (SSE). The square root operation can be either X87 or SSE. Set 'edge=1, invert=1, cmask=1' to count the number of divides. Count may be incorrect when SMT is on.
(Event 14H, Umask 02H) Counts the number of multiply operations executed. This includes integer as well as floating point multiply operations but excludes DPPS mul and MPSAD. Count may be incorrect when SMT is on.
(Event 17H, Umask 01H) Counts the number of instructions written into the instruction queue every cycle.
(Event 18H, Umask 01H) Counts number of instructions that require decoder 0 to be decoded. Usually, this means that the instruction maps to more than 1 uop
(Event 19H, Umask 01H) An instruction that generates two uops was decoded
(Event 1EH, Umask 01H) This event counts the number of cycles during which instructions are written to the instruction queue. Dividing this counter by the number of instructions written to the instruction queue (INST_QUEUE_WRITES) yields the average number of instructions decoded each cycle. If this number is less than four and the pipe stalls, this indicates that the decoder is failing to decode enough instructions per cycle to sustain the 4-wide pipeline. If SSE* instructions that are 6 bytes or longer arrive one after another, then front end throughput may limit execution speed.
(Event 20H, Umask 01H) Number of loops that can not stream from the instruction queue.
(Event 24H, Umask 01H) Counts number of loads that hit the L2 cache. L2 loads include both L1D demand misses as well as L1D prefetches. L2 loads can be rejected for various reasons. Only non rejected loads are counted.
(Event 24H, Umask 02H) Counts the number of loads that miss the L2 cache. L2 loads include both L1D demand misses as well as L1D prefetches.
(Event 24H, Umask 03H) Counts all L2 load requests. L2 loads include both L1D demand misses as well as L1D prefetches.
(Event 24H, Umask 04H) Counts the number of store RFO requests that hit the L2 cache. L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetches. Count includes WC memory requests, where the data is not fetched but the permission to write the line is required.
(Event 24H, Umask 08H) Counts the number of store RFO requests that miss the L2 cache. L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetches.
(Event 24H, Umask 0CH) Counts all L2 store RFO requests. L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetches.
(Event 24H, Umask 10H) Counts number of instruction fetches that hit the L2 cache. L2 instruction fetches include both L1I demand misses as well as L1I instruction prefetches.
(Event 24H, Umask 20H) Counts number of instruction fetches that miss the L2 cache. L2 instruction fetches include both L1I demand misses as well as L1I instruction prefetches.
(Event 24H, Umask 30H) Counts all instruction fetches. L2 instruction fetches include both L1I demand misses as well as L1I instruction prefetches.
(Event 24H, Umask 40H) Counts L2 prefetch hits for both code and data.
(Event 24H, Umask 80H) Counts L2 prefetch misses for both code and data.
(Event 24H, Umask C0H) Counts all L2 prefetches for both code and data.
(Event 24H, Umask AAH) Counts all L2 misses for both code and data.
(Event 24H, Umask FFH) Counts all L2 requests for both code and data.
(Event 26H, Umask 01H) Counts number of L2 data demand loads where the cache line to be loaded is in the I (invalid) state, i.e. a cache miss. L2 demand loads are both L1D demand misses and L1D prefetches.
(Event 26H, Umask 02H) Counts number of L2 data demand loads where the cache line to be loaded is in the S (shared) state. L2 demand loads are both L1D demand misses and L1D prefetches.
(Event 26H, Umask 04H) Counts number of L2 data demand loads where the cache line to be loaded is in the E (exclusive) state. L2 demand loads are both L1D demand misses and L1D prefetches.
(Event 26H, Umask 08H) Counts number of L2 data demand loads where the cache line to be loaded is in the M (modified) state. L2 demand loads are both L1D demand misses and L1D prefetches.
(Event 26H, Umask 0FH) Counts all L2 data demand requests. L2 demand loads are both L1D demand misses and L1D prefetches.
(Event 26H, Umask 10H) Counts number of L2 prefetch data loads where the cache line to be loaded is in the I (invalid) state, i.e. a cache miss.
(Event 26H, Umask 20H) Counts number of L2 prefetch data loads where the cache line to be loaded is in the S (shared) state. A prefetch RFO will miss on an S state line, while a prefetch read will hit on an S state line.
(Event 26H, Umask 40H) Counts number of L2 prefetch data loads where the cache line to be loaded is in the E (exclusive) state.
(Event 26H, Umask 80H) Counts number of L2 prefetch data loads where the cache line to be loaded is in the M (modified) state.
(Event 26H, Umask F0H) Counts all L2 prefetch requests.
(Event 26H, Umask FFH) Counts all L2 data requests.
(Event 27H, Umask 01H) Counts number of L2 demand store RFO requests where the cache line to be loaded is in the I (invalid) state, i.e., a cache miss. The L1D prefetcher does not issue a RFO prefetch. This is a demand RFO request.
(Event 27H, Umask 02H) Counts number of L2 store RFO requests where the cache line to be loaded is in the S (shared) state. The L1D prefetcher does not issue a RFO prefetch. This is a demand RFO request.
(Event 27H, Umask 08H) Counts number of L2 store RFO requests where the cache line to be loaded is in the M (modified) state. The L1D prefetcher does not issue a RFO prefetch. This is a demand RFO request.
(Event 27H, Umask 0EH) Counts number of L2 store RFO requests where the cache line to be loaded is in either the S, E or M states. The L1D prefetcher does not issue a RFO prefetch. This is a demand RFO request.
(Event 27H, Umask 0FH) Counts all L2 store RFO requests. The L1D prefetcher does not issue a RFO prefetch. This is a demand RFO request.
(Event 27H, Umask 10H) Counts number of L2 demand lock RFO requests where the cache line to be loaded is in the I (invalid) state, i.e. a cache miss.
(Event 27H, Umask 20H) Counts number of L2 lock RFO requests where the cache line to be loaded is in the S (shared) state.
(Event 27H, Umask 40H) Counts number of L2 demand lock RFO requests where the cache line to be loaded is in the E (exclusive) state.
(Event 27H, Umask 80H) Counts number of L2 demand lock RFO requests where the cache line to be loaded is in the M (modified) state.
(Event 27H, Umask E0H) Counts number of L2 demand lock RFO requests where the cache line to be loaded is in either the S, E, or M state.
(Event 27H, Umask F0H) Counts all L2 demand lock RFO requests.
(Event 28H, Umask 01H) Counts number of L1 writebacks to the L2 where the cache line to be written is in the I (invalid) state, i.e. a cache miss.
(Event 28H, Umask 02H) Counts number of L1 writebacks to the L2 where the cache line to be written is in the S state.
(Event 28H, Umask 04H) Counts number of L1 writebacks to the L2 where the cache line to be written is in the E (exclusive) state.
(Event 28H, Umask 08H) Counts number of L1 writebacks to the L2 where the cache line to be written is in the M (modified) state.
(Event 28H, Umask 0FH) Counts all L1 writebacks to the L2.
(Event 2EH, Umask 02H) Counts uncore Last Level Cache references. Because cache hierarchy, cache sizes and other implementation-specific characteristics vary, comparing values to estimate performance differences is not recommended. See Table A-1.
(Event 2EH, Umask 01H) Counts uncore Last Level Cache misses. Because cache hierarchy, cache sizes and other implementation-specific characteristics vary, comparing values to estimate performance differences is not recommended. See Table A-1.
(Event 3CH, Umask 00H) Counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. see Table A-1
(Event 3CH, Umask 01H) Increments at the frequency of TSC when not halted. see Table A-1
(Event 49H, Umask 01H) Counts the number of misses in the STLB which causes a page walk.
(Event 49H, Umask 02H) Counts number of misses in the STLB which resulted in a completed page walk.
(Event 49H, Umask 04H) Counts cycles of page walk due to misses in the STLB.
(Event 49H, Umask 10H) Counts the number of DTLB first level misses that hit in the second level TLB. This event is only relevant if the core contains multiple DTLB levels.
(Event 49H, Umask 80H) Counts number of completed large page walks due to misses in the STLB.
(Event 4CH, Umask 01H) Counts load operations sent to the L1 data cache while a previous SSE prefetch instruction to the same cache line has started prefetching but has not yet finished.
(Event 4EH, Umask 01H) Counts number of hardware prefetch requests dispatched out of the prefetch FIFO.
(Event 4EH, Umask 02H) Counts number of hardware prefetch requests that miss the L1D. There are two prefetchers in the L1D. A streamer, which predicts lines sequentially after this one should be fetched, and the IP prefetcher that remembers access patterns for the current instruction. The streamer prefetcher stops on an L1D hit, while the IP prefetcher does not.
(Event 4EH, Umask 04H) Counts number of prefetch requests triggered by the Finite State Machine and pushed into the prefetch FIFO. Some of the prefetch requests are dropped due to overwrites or competition between the IP index prefetcher and streamer prefetcher. The prefetch FIFO contains 4 entries.
(Event 4FH, Umask 10H) Counts Extended Page walk cycles.
(Event 51H, Umask 01H) Counts the number of lines brought into the L1 data cache. Counter 0, 1 only.
(Event 51H, Umask 02H) Counts the number of modified lines brought into the L1 data cache. Counter 0, 1 only.
(Event 51H, Umask 04H) Counts the number of modified lines evicted from the L1 data cache due to replacement. Counter 0, 1 only.
(Event 51H, Umask 08H) Counts the number of modified lines evicted from the L1 data cache due to snoop HITM intervention. Counter 0, 1 only
(Event 52H, Umask 01H) Counts the number of cacheable load lock speculated instructions accepted into the fill buffer.
(Event 53H, Umask 01H) Counts the number of cacheable load lock speculated or retired instructions accepted into the fill buffer.
(Event 60H, Umask 01H) Counts weighted cycles of offcore demand data read requests. Does not include L2 prefetch requests. Counter 0.
(Event 60H, Umask 02H) Counts weighted cycles of offcore demand code read requests. Does not include L2 prefetch requests. Counter 0.
(Event 60H, Umask 04H) Counts weighted cycles of offcore demand RFO requests. Does not include L2 prefetch requests. Counter 0.
(Event 60H, Umask 08H) Counts weighted cycles of offcore read requests of any kind. Includes L2 prefetch requests. Counter 0.
(Event 63H, Umask 01H) Cycle count during which the L1D and L2 are locked. A lock is asserted when there is a locked memory access, due to uncacheable memory, a locked operation that spans two cache lines, or a page walk from an uncacheable page table. Counter 0, 1 only. L1D and L2 locks have a very high performance penalty and it is highly recommended to avoid such accesses.
(Event 63H, Umask 02H) Counts the number of cycles that cacheline in the L1 data cache unit is locked. Counter 0, 1 only.
(Event 6CH, Umask 01H) Counts the number of completed I/O transactions.
(Event 80H, Umask 01H) Counts all instruction fetches that hit the L1 instruction cache.
(Event 80H, Umask 02H) Counts all instruction fetches that miss the L1I cache. This includes instruction cache misses, streaming buffer misses, victim cache misses and uncacheable fetches. An instruction fetch miss is counted only once and not once for every cycle it is outstanding.
(Event 80H, Umask 03H) Counts all instruction fetches, including uncacheable fetches that bypass the L1I.
(Event 80H, Umask 04H) Cycle counts for which an instruction fetch stalls due to a L1I cache miss, ITLB miss or ITLB fault.
(Event 82H, Umask 01H) Counts number of large ITLB hits.
(Event 85H, Umask 01H) Counts the number of misses in all levels of the ITLB which causes a page walk.
(Event 85H, Umask 02H) Counts number of misses in all levels of the ITLB which resulted in a completed page walk.
(Event 85H, Umask 04H) Counts ITLB miss page walk cycles.
(Event 85H, Umask 80H) Counts number of completed large page walks due to misses in the STLB.
(Event 87H, Umask 01H) Cycles Instruction Length Decoder stalls due to length changing prefixes: 66, 67 or REX.W (for EM64T) instructions which change the length of the decoded instruction.
(Event 87H, Umask 02H) Instruction Length Decoder stall cycles due to Branch Prediction Unit (BPU) Most Recently Used (MRU) bypass.
(Event 87H, Umask 04H) Stall cycles due to a full instruction queue.
(Event 87H, Umask 08H) Counts the number of regen stalls.
(Event 87H, Umask 0FH) Counts any cycles the Instruction Length Decoder is stalled.
(Event 88H, Umask 01H) Counts the number of conditional near branch instructions executed, but not necessarily retired.
(Event 88H, Umask 02H) Counts all unconditional near branch instructions excluding calls and indirect branches.
(Event 88H, Umask 04H) Counts the number of executed indirect near branch instructions that are not calls.
(Event 88H, Umask 07H) Counts all non call near branch instructions executed, but not necessarily retired.
(Event 88H, Umask 08H) Counts indirect near branches that have a return mnemonic.
(Event 88H, Umask 10H) Counts unconditional near call branch instructions, excluding non call branch, executed.
(Event 88H, Umask 20H) Counts indirect near calls, including both register and memory indirect, executed.
(Event 88H, Umask 30H) Counts all near call branches executed, but not necessarily retired.
(Event 88H, Umask 40H) Counts taken near branches executed, but not necessarily retired.
(Event 88H, Umask 7FH) Counts all near executed branches (not necessarily retired). This includes only instructions and not micro-op branches. Frequent branching is not necessarily a major performance issue. However frequent branch mispredictions may be a problem.
(Event 89H, Umask 01H) Counts the number of mispredicted conditional near branch instructions executed, but not necessarily retired.
(Event 89H, Umask 02H) Counts mispredicted macro unconditional near branch instructions, excluding calls and indirect branches (should always be 0).
(Event 89H, Umask 04H) Counts the number of executed mispredicted indirect near branch instructions that are not calls.
(Event 89H, Umask 07H) Counts mispredicted non call near branches executed, but not necessarily retired.
(Event 89H, Umask 08H) Counts mispredicted indirect branches that have a near return mnemonic.
(Event 89H, Umask 10H) Counts mispredicted non-indirect near calls executed, (should always be 0).
(Event 89H, Umask 20H) Counts mispredicted indirect near calls executed, including both register and memory indirect.
(Event 89H, Umask 30H) Counts all mispredicted near call branches executed, but not necessarily retired.
(Event 89H, Umask 40H) Counts executed mispredicted near branches that are taken, but not necessarily retired.
(Event 89H, Umask 7FH) Counts the number of mispredicted near branch instructions that were executed, but not necessarily retired.
(Event A2H, Umask 01H) Counts the number of Allocator resource related stalls. Includes register renaming buffer entries, memory buffer entries. In addition to resource related stalls, this event counts some other events. Includes stalls arising during branch misprediction recovery, such as if retirement of the mispredicted branch is delayed and stalls arising while store buffer is draining from synchronizing operations. Does not include stalls due to SuperQ (off core) queue full, too many cache misses, etc.
(Event A2H, Umask 02H) Counts the cycles of stall due to lack of load buffer for load operation.
(Event A2H, Umask 04H) This event counts the number of cycles when the number of instructions in the pipeline waiting for execution reaches the limit the processor can handle. A high count of this event indicates that there are long latency operations in the pipe (possibly load and store operations that miss the L2 cache, or instructions dependent upon instructions further down the pipeline that have yet to retire). When RS is full, new instructions can not enter the reservation station and start execution.
(Event A2H, Umask 08H) This event counts the number of cycles that a resource related stall will occur due to the number of store instructions reaching the limit of the pipeline, (i.e. all store buffers are used). The stall ends when a store instruction commits its data to the cache or memory.
(Event A2H, Umask 10H) Counts the cycles of stall due to re-order buffer full.
(Event A2H, Umask 20H) Counts the number of cycles while execution was stalled due to writing the floating-point unit (FPU) control word.
(Event A2H, Umask 40H) Stalls due to the MXCSR register rename occurring too close to a previous MXCSR rename. The MXCSR provides control and status for the MMX registers.
(Event A2H, Umask 80H) Counts the number of cycles while execution was stalled due to other resource issues.
(Event A6H, Umask 01H) Counts the number of instructions decoded that are macro-fused but not necessarily executed or retired.
(Event A7H, Umask 01H) Counts number of times a BACLEAR was forced by the Instruction Queue. The IQ is also responsible for providing conditional branch prediction direction based on a static scheme and dynamic data provided by the L2 Branch Prediction Unit. If the conditional branch target is not found in the Target Array and the IQ predicts that the branch is taken, then the IQ will force the Branch Address Calculator to issue a BACLEAR. Each BACLEAR asserted by the BAC generates approximately an 8 cycle bubble in the instruction fetch pipeline.
(Event A8H, Umask 01H) Counts the number of micro-ops delivered by loop stream detector. Use cmask=1 and invert to count cycles.
(Event AEH, Umask 01H) Counts the number of ITLB flushes
(Event B0H, Umask 01H) Counts number of offcore demand data read requests. Does not count L2 prefetch requests.
(Event B0H, Umask 02H) Counts number of offcore demand code read requests. Does not count L2 prefetch requests.
(Event B0H, Umask 04H) Counts number of offcore demand RFO requests. Does not count L2 prefetch requests.
(Event B0H, Umask 08H) Counts number of offcore read requests. Includes L2 prefetch requests.
(Event B0H, Umask 10H) Counts number of offcore RFO requests. Includes L2 prefetch requests.
(Event B0H, Umask 40H) Counts number of L1D writebacks to the uncore.
(Event B0H, Umask 80H) Counts all offcore requests.
(Event B1H, Umask 01H) Counts number of Uops executed that were issued on port 0. Port 0 handles integer arithmetic, SIMD and FP add Uops.
(Event B1H, Umask 02H) Counts number of Uops executed that were issued on port 1. Port 1 handles integer arithmetic, SIMD, integer shift, FP multiply and FP divide Uops.
(Event B1H, Umask 04H) Counts number of Uops executed that were issued on port 2. Port 2 handles the load Uops. This is a core count only and can not be collected per thread.
(Event B1H, Umask 08H) Counts number of Uops executed that were issued on port 3. Port 3 handles store Uops. This is a core count only and can not be collected per thread.
(Event B1H, Umask 10H) Counts number of Uops executed that were issued on port 4. Port 4 handles the value to be stored for the store Uops issued on port 3. This is a core count only and can not be collected per thread.
(Event B1H, Umask 1FH) Counts number of cycles there are one or more uops being executed and were issued on ports 0-4. This is a core count only and can not be collected per thread.
(Event B1H, Umask 20H) Counts number of Uops executed that were issued on port 5.
(Event B1H, Umask 3FH) Counts number of cycles there are one or more uops being executed on any ports. This is a core count only and can not be collected per thread.
(Event B1H, Umask 40H) Counts number of Uops executed that were issued on port 0, 1, or 5. Use cmask=1, invert=1 to count stall cycles.
(Event B1H, Umask 80H) Counts number of Uops executed that were issued on port 2, 3, or 4.
(Event B2H, Umask 01H) Counts number of cycles the SQ is full to handle off-core requests.
(Event B3H, Umask 01H) Counts weighted cycles of snoopq requests for data. Counter 0 only. Use cmask=1 to count cycles not empty.
(Event B3H, Umask 02H) Counts weighted cycles of snoopq invalidate requests. Counter 0 only. Use cmask=1 to count cycles not empty.
(Event B3H, Umask 04H) Counts weighted cycles of snoopq requests for code. Counter 0 only. Use cmask=1 to count cycles not empty.
(Event B4H, Umask 01H) Counts the number of snoop code requests.
(Event B4H, Umask 02H) Counts the number of snoop data requests.
(Event B4H, Umask 04H) Counts the number of snoop invalidate requests
(Event B7H, Umask 01H) see Section 30.6.1.3, Off-core Response Performance Monitoring in the Processor Core. Requires programming MSR 01A6H.
(Event B8H, Umask 01H) Counts HIT snoop response sent by this thread in response to a snoop request.
(Event B8H, Umask 02H) Counts HIT E snoop response sent by this thread in response to a snoop request.
(Event B8H, Umask 04H) Counts HIT M snoop response sent by this thread in response to a snoop request.
(Event BBH, Umask 01H) see Section 30.6.1.3, Off-core Response Performance Monitoring in the Processor Core. Use MSR 01A7H.
(Event C0H, Umask 01H) See Table A-1 Notes: INST_RETIRED.ANY is counted by a designated fixed counter. INST_RETIRED.ANY_P is counted by a programmable counter and is an architectural performance event. Event is supported if CPUID.A.EBX[1] = 0. Counting: Faulting executions of GETSEC/VM entry/VM Exit/MWait will not count as retired instructions.
(Event C0H, Umask 02H) Counts the number of floating point computational operations retired: floating point computational operations executed by the assist handler and sub-operations of complex floating point instructions like transcendental instructions.
(Event C0H, Umask 04H) Counts the number of retired MMX instructions.
(Event C2H, Umask 01H) Counts the number of micro-ops retired, (macro-fused=1, micro-fused=2, others=1; maximum count of 8 per cycle). Most instructions are composed of one or two micro-ops. Some instructions are decoded into longer sequences such as repeat instructions, floating point transcendental instructions, and assists. Use cmask=1 and invert to count active cycles or stalled cycles.
(Event C2H, Umask 02H) Counts the number of retirement slots used each cycle
(Event C2H, Umask 04H) Counts number of macro-fused uops retired.
(Event C3H, Umask 01H) Counts the cycles machine clear is asserted.
(Event C3H, Umask 02H) Counts the number of machine clears due to memory order conflicts.
(Event C3H, Umask 04H) Counts the number of times that a program writes to a code section. Self-modifying code causes a severe penalty in all Intel 64 and IA-32 processors. The modified cache line is written back to the L2 and L3 caches.
(Event C4H, Umask 00H) See Table A-1.
(Event C4H, Umask 01H) Counts the number of conditional branch instructions retired.
(Event C4H, Umask 02H) Counts the number of direct & indirect near unconditional calls retired.
(Event C4H, Umask 04H) Counts the number of branch instructions retired.
(Event C5H, Umask 00H) See Table A-1.
(Event C5H, Umask 01H) Counts mispredicted conditional retired calls.
(Event C5H, Umask 02H) Counts mispredicted direct & indirect near unconditional retired calls.
(Event C5H, Umask 04H) Counts all mispredicted retired calls.
(Event C7H, Umask 01H) Counts SIMD packed single-precision floating point Uops retired.
(Event C7H, Umask 02H) Counts SIMD scalar single-precision floating point Uops retired.
(Event C7H, Umask 04H) Counts SIMD packed double-precision floating point Uops retired.
(Event C7H, Umask 08H) Counts SIMD scalar double-precision floating point Uops retired.
(Event C7H, Umask 10H) Counts 128-bit SIMD vector integer Uops retired.
(Event C8H, Umask 20H) Counts the number of retired instructions that missed the ITLB when the instruction was fetched.
(Event CBH, Umask 01H) Counts number of retired loads that hit the L1 data cache.
(Event CBH, Umask 02H) Counts number of retired loads that hit the L2 data cache.
(Event CBH, Umask 04H) Counts number of retired loads that hit their own, unshared lines in the L3 cache.
(Event CBH, Umask 08H) Counts number of retired loads that hit in a sibling core's L2 (on die core). Since the L3 is inclusive of all cores on the package, this is an L3 hit. This counts both clean or modified hits.
(Event CBH, Umask 10H) Counts number of retired loads that miss the L3 cache. The load was satisfied by a remote socket, local memory or an IOH.
(Event CBH, Umask 40H) Counts number of retired loads that miss the L1D and the address is located in an allocated line fill buffer and will soon be committed to cache. This is counting secondary L1D misses.
(Event CBH, Umask 80H) Counts the number of retired loads that missed the DTLB. The DTLB miss is not counted if the load operation causes a fault. This event counts loads from cacheable memory only. The event does not count loads by software prefetches. Counts both primary and secondary misses to the TLB.
(Event CCH, Umask 01H) Counts the first floating-point instruction following any MMX instruction. You can use this event to estimate the penalties for the transitions between floating-point and MMX technology states.
(Event CCH, Umask 02H) Counts the first MMX instruction following a floating-point instruction. You can use this event to estimate the penalties for the transitions between floating-point and MMX technology states.
(Event CCH, Umask 03H) Counts all transitions from floating point to MMX instructions and from MMX instructions to floating point instructions. You can use this event to estimate the penalties for the transitions between floating-point and MMX technology states.
(Event D0H, Umask 01H) Counts the number of instructions decoded, (but not necessarily executed or retired).
(Event D1H, Umask 01H) Counts the cycles of decoder stalls.
(Event D1H, Umask 02H) Counts the number of Uops decoded by the Microcode Sequencer, MS. The MS delivers uops when the instruction is more than 4 uops long or a microcode assist is occurring.
(Event D1H, Umask 04H) Counts number of stack pointer (ESP) instructions decoded: push, pop, call, ret, etc. ESP instructions do not generate a Uop to increment or decrement ESP. Instead, they update an ESP_Offset register that keeps track of the delta to the current value of the ESP register.
(Event D1H, Umask 08H) Counts number of stack pointer (ESP) sync operations where an ESP instruction is corrected by adding the ESP offset register to the current value of the ESP register.
(Event D2H, Umask 01H) Counts the number of cycles during which execution stalled due to several reasons, one of which is a partial flag register stall. A partial register stall may occur when two conditions are met: 1) an instruction modifies some, but not all, of the flags in the flag register and 2) the next instruction, which depends on flags, depends on flags that were not modified by this instruction.
(Event D2H, Umask 02H) This event counts the number of cycles instruction execution latency became longer than the defined latency because the instruction used a register that was partially written by previous instruction.
(Event D2H, Umask 04H) Counts the number of cycles when ROB read port stalls occurred, which did not allow new micro-ops to enter the out-of-order pipeline. Note that, at this stage in the pipeline, additional stalls may occur at the same cycle and prevent the stalled micro-ops from entering the pipe. In such a case, micro-ops retry entering the execution pipe in the next cycle and the ROB-read port stall is counted again.
(Event D2H, Umask 08H) Counts the cycles where we stall due to microarchitecturally required serialization. Microcode scoreboarding stalls.
(Event D2H, Umask 0FH) Counts all Register Allocation Table stall cycles due to: cycles when ROB read port stalls occurred, which did not allow new micro-ops to enter the execution pipe; cycles when partial register stalls occurred; cycles when flag stalls occurred; and cycles when floating-point unit (FPU) status word stalls occurred. To count each of these conditions separately use the events: RAT_STALLS.ROB_READ_PORT, RAT_STALLS.PARTIAL, RAT_STALLS.FLAGS, and RAT_STALLS.FPSW.
(Event D4H, Umask 01H) Counts the number of stall cycles due to the lack of renaming resources for the ES, DS, FS, and GS segment registers. If a segment is renamed but not retired and a second update to the same segment occurs, a stall occurs in the front-end of the pipeline until the renamed segment retires.
(Event D5H, Umask 01H) Counts the number of times the ES segment register is renamed.
(Event DBH, Umask 01H) Counts unfusion events due to floating point exception to a fused uop.
(Event E0H, Umask 01H) Counts the number of branch instructions decoded.
(Event E5H, Umask 01H) Counts number of times the Branch Prediction Unit missed predicting a call or return branch.
(Event E6H, Umask 01H) Counts the number of times the front end is resteered, mainly when the Branch Prediction Unit cannot provide a correct prediction and this is corrected by the Branch Address Calculator at the front end. This can occur if the code has many branches such that they cannot be consumed by the BPU. Each BACLEAR asserted by the BAC generates approximately an 8 cycle bubble in the instruction fetch pipeline. The effect on total execution time depends on the surrounding code.
(Event E6H, Umask 02H) Counts number of Branch Address Calculator clears (BACLEAR) asserted due to conditional branch instructions in which there was a target hit but the direction was wrong. Each BACLEAR asserted by the BAC generates approximately an 8 cycle bubble in the instruction fetch pipeline.
(Event E8H, Umask 01H) Counts early (normal) Branch Prediction Unit clears: BPU predicted a taken branch after incorrectly assuming that it was not taken. The BPU clear leads to a 2 cycle bubble in the Front End.
(Event E8H, Umask 02H) Counts late Branch Prediction Unit clears due to Most Recently Used conflicts. The BPU clear leads to a 3 cycle bubble in the Front End.
(Event ECH, Umask 01H) Counts cycles threads are active.
(Event F0H, Umask 01H) Counts L2 load operations due to HW prefetch or demand loads.
(Event F0H, Umask 02H) Counts L2 RFO operations due to HW prefetch or demand RFOs.
(Event F0H, Umask 04H) Counts L2 instruction fetch operations due to HW prefetch or demand ifetch.
(Event F0H, Umask 08H) Counts L2 prefetch operations.
(Event F0H, Umask 10H) Counts L1D writeback operations to the L2.
(Event F0H, Umask 20H) Counts L2 cache line fill operations due to load, RFO, L1D writeback or prefetch.
(Event F0H, Umask 40H) Counts L2 writeback operations to the L3.
(Event F0H, Umask 80H) Counts all L2 cache operations.
(Event F1H, Umask 02H) Counts the number of cache lines allocated in the L2 cache in the S (shared) state.
(Event F1H, Umask 04H) Counts the number of cache lines allocated in the L2 cache in the E (exclusive) state.
(Event F1H, Umask 07H) Counts the number of cache lines allocated in the L2 cache.
(Event F2H, Umask 01H) Counts L2 clean cache lines evicted by a demand request.
(Event F2H, Umask 02H) Counts L2 dirty (modified) cache lines evicted by a demand request.
(Event F2H, Umask 04H) Counts L2 clean cache line evicted by a prefetch request.
(Event F2H, Umask 08H) Counts L2 modified cache line evicted by a prefetch request.
(Event F2H, Umask 0FH) Counts all L2 cache lines evicted for any reason.
(Event F4H, Umask 04H) Counts number of Super Queue LRU hints sent to L3.
(Event F4H, Umask 10H) Counts the number of SQ lock splits across a cache line.
(Event F6H, Umask 01H) Counts cycles the Super Queue is full. Neither of the threads on this core will be able to access the uncore.
(Event F7H, Umask 01H) Counts the number of floating point operations executed that required micro-code assist intervention. Assists are required in the following cases: SSE instructions (denormal input when the DAZ flag is off, or underflow result when the FTZ flag is off); x87 instructions (NaN or denormal loaded to a register or used as input from memory, division by 0, or underflow output).
(Event F7H, Umask 02H) Counts number of floating point micro-code assist when the output value (destination register) is invalid.
(Event F7H, Umask 04H) Counts number of floating point micro-code assist when the input value (one of the source operands to an FP instruction) is invalid.
(Event FDH, Umask 01H) Counts number of SIMD integer 64 bit packed multiply operations.
(Event FDH, Umask 02H) Counts number of SIMD integer 64 bit packed shift operations.
(Event FDH, Umask 04H) Counts number of SIMD integer 64 bit pack operations.
(Event FDH, Umask 08H) Counts number of SIMD integer 64 bit unpack operations.
(Event FDH, Umask 10H) Counts number of SIMD integer 64 bit logical operations.
(Event FDH, Umask 20H) Counts number of SIMD integer 64 bit arithmetic operations.
(Event FDH, Umask 40H) Counts number of SIMD integer 64 bit shift or move operations.

pmc(3), pmc.atom(3), pmc.core(3), pmc.corei7(3), pmc.corei7uc(3), pmc.iaf(3), pmc.k7(3), pmc.k8(3), pmc.p4(3), pmc.p5(3), pmc.p6(3), pmc.soft(3), pmc.tsc(3), pmc.ucf(3), pmc.westmereuc(3), pmc_cpuinfo(3), pmclog(3), hwpmc(4)

The pmc library first appeared in FreeBSD 6.0.

The “libpmc” library was written by Joseph Koshy <jkoshy@FreeBSD.org>.

February 25, 2012 FreeBSD-12.0