How to troubleshoot performance

Added by Peter Braun over 2 years ago

Hello,

We have Nexenta running on a SuperMicro quad-core E5520 box with 12 GB RAM and an SC826E1 chassis (single SAS expander) holding 8x 2 TB WD RE4 7.2k RPM drives.

The SAN acts as iSCSI VM storage for open-source Xen.

We currently host around 100 VMs.

Even with a 10 Gb network for iSCSI, we see sluggish writes of around 30 MB/s from the VMs. We tried adding a mirror of Intel X25-E SSDs as a ZIL, but it made no difference.

There are two OCZ Vertex 2 drives as cache (L2ARC).

What steps should we take to troubleshoot the performance?

Thanks

Peter


Replies

RE: How to troubleshoot performance - Added by Jeff Gibson over 2 years ago

Do you have dedup turned on? What does iostat -xCne 2 show while this is going on? How are your drives laid out (zpool status)? Have you done any local benchmarks on the Nexenta system with dd to and from the disks, and does that show the proper speed? Have you tried with the ZIL disabled (don't do this with live/production data or you may be sorry)?
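For the dd test, something along these lines is a reasonable starting point. The path is just an example (NexentaStor normally mounts pools under /volumes), and note that /dev/zero compresses to nothing if compression is enabled, so the write figure would be optimistic in that case:

# sequential write, ~10 GB so we are not just measuring cache
dd if=/dev/zero of=/volumes/BIG/ddtest bs=1M count=10000
# sequential read of the same file
dd if=/volumes/BIG/ddtest of=/dev/null bs=1M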

I know there is usually concern around using SAS expanders with SATA disks, but I'm still of the opinion that if everything is working within spec there shouldn't be an issue...

RE: How to troubleshoot performance - Added by Peter Braun over 2 years ago

Hello,

dedup is turned off.

Here is iostat:

root@nx:/export/home/admin# iostat -xCne
extended device statistics       ---- errors ---
r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
7.9  146.4 13458.0 7278.2  0.0  2.4    0.0   15.5   0 144   0  0  0  0 c0
8.7    8.9  904.0  536.0  0.0  0.1    0.0    6.5   0   7   0  0  0  0 c0t5000C5001065807Bd0
8.6    9.0  888.8  536.0  0.0  0.1    0.0    6.0   0   6   0   0   0   0 c0t5000C500141090AFd0
123.7   14.6 1468.2  779.6  0.0  0.3    0.0    2.0   0  17   0   0   0   0 c0t50014EE057ADBFC5d0
123.5   14.6 1464.5  779.5  0.0  0.3    0.0    2.0   0  16   0   0   0   0 c0t50014EE0AD036CD7d0
123.7   14.6 1469.6  779.6  0.0  0.3    0.0    2.0   0  17   0   0   0   0 c0t50014EE002587CC2d0
124.2   14.5 1488.8  779.9  0.0  0.3    0.0    2.0   0  17   0   0   0   0 c0t50014EE0AD037437d0
123.5   14.6 1464.4  779.6  0.0  0.3    0.0    2.0   0  17   0   0   0   0 c0t50014EE057ADFADCd0
123.7   14.6 1469.5  779.6  0.0  0.3    0.0    2.0   0  17   0   0   0   0 c0t50014EE057ADCA40d0
2.3    6.6  125.1   57.5  0.0  0.0    0.0    2.1   0   1   0   0   0   0 c0t5000C500125559D3d0
2.1    6.6  116.0   57.5  0.0  0.0    0.0    2.2   0   1   0   0   0   0 c0t5000C500125523A3d0
123.6   14.6 1464.8  779.5  0.0  0.3    0.0    2.0   0  17   0   0   0   0 c0t50014EE002B35BE0d0
108.5   25.9 1282.2  948.1  0.0  0.2    0.0    1.8   0  17   0   0   0   0 c0t50014EE0AD08B9C5d0
34.6    4.4  419.6  363.4  0.0  0.0    0.4    0.4   0   1   0   0   0   0 c5d0
8.2    0.9   92.7   64.0  0.0  0.0    0.1    0.3   0   0   0   0   0   0 c5d1
34.7    4.4  419.1  363.2  0.0  0.0    0.4    0.4   0   1   0   0   0   0 c4d0
8.2    0.9   92.6   64.1  0.0  0.0    0.1    0.3   0   0   0   0   0   0 c4d1
114.9   13.4 1364.8  719.1  0.0  0.3    0.0    2.0   0  15   0 213 300 513 sd10


root@nx:/export/home/admin# zpool status
  pool: BIG
 state: ONLINE
  scan: resilvered 356G in 14h13m with 0 errors on Mon Mar 12 01:30:56 2012
config:

        NAME                       STATE     READ WRITE CKSUM
        BIG                        ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t50014EE002587CC2d0  ONLINE       0     0     0
            c0t50014EE057ADBFC5d0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t50014EE057ADCA40d0  ONLINE       0     0     0
            c0t50014EE057ADFADCd0  ONLINE       0     0     0
          mirror-2                 ONLINE       0     0     0
            c0t50014EE002B35BE0d0  ONLINE       0     0     0
            c0t50014EE0AD036CD7d0  ONLINE       0     0     0
          mirror-3                 ONLINE       0     0     0
            c0t50014EE0AD037437d0  ONLINE       0     0     0
            c0t50014EE0AD08B9C5d0  ONLINE       0     0     0
        cache
          c4d1                     ONLINE       0     0     0
          c5d1                     ONLINE       0     0     0

errors: No known data errors

  pool: TB
 state: ONLINE
  scan: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        TB                         ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C5001065807Bd0  ONLINE       0     0     0
            c0t5000C500141090AFd0  ONLINE       0     0     0

errors: No known data errors

  pool: syspool
 state: ONLINE
  scan: scrub repaired 0 in 0h5m with 0 errors on Sun Apr  1 03:05:22 2012
config:

        NAME                         STATE     READ WRITE CKSUM
        syspool                      ONLINE       0     0     0
          mirror-0                   ONLINE       0     0     0
            c0t5000C500125559D3d0s0  ONLINE       0     0     0
            c0t5000C500125523A3d0s0  ONLINE       0     0     0

errors: No known data errors

Haven't tried dd yet.

RE: How to troubleshoot performance - Added by Linda Kateley over 2 years ago

I agree with Jeff.

The thing that will give the best performance is a good disk layout: the more vdevs, the better. If I take all of my disks and put them into one big raidz pool, I will most likely get the speed of a single disk; the more disks I can write to in parallel, the better the performance.
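For example (device names here are hypothetical), eight disks as four mirrored pairs give ZFS four vdevs to stripe writes across, while the same eight disks as a single raidz2 vdev give it only one:

zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0 mirror c1t6d0 c1t7d0
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0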

Next, you look for bottlenecks:

vmstat will show whether your CPU or memory is the bottleneck.

iostat will show whether any single disk, or group of disks, is the bottleneck.

The network gets a little trickier; jumbo frames are recommended for iSCSI.
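To verify jumbo frames are actually in effect end to end, check the link MTU and push a large payload across. The interface and host names here are just examples:

dladm show-linkprop -p mtu ixgbe0
ping -s xen-host 8000

On Solaris, ping takes the payload size after the host name. If the local link is already at MTU 9000 the sender won't fragment, so on a flat storage VLAN a payload well over 1500 bytes that doesn't get through points at a switch port or host in the path that isn't set up for jumbo frames.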

RE: How to troubleshoot performance - Added by Peter Braun over 2 years ago

OK, here comes the vmstat output:

root@nx:/export/home/admin# vmstat 2
kthr      memory            page            disk          faults      cpu
r b w   swap  free  re  mf pi po fr de sr cd cd cd cd   in   sy   cs us sy id
0 1 0 723336 1279788 21 2019 0 0  0  0  0 39  9 39  9 12509 5315 8822 1  2 97
1 0 0 814324 1372960 7  41  0  0  0  0  0  0 69  0 130 11788 699 6832 0  2 98
0 0 0 810276 1368996 0   5  0  0  0  0  0  0  3  0  7 10733 489 3763  0  6 94
1 0 0 806824 1365600 0   5  0  0  0  0  0  0  5  0 13 9801  486 2770  0  1 99
0 0 0 801420 1360208 0   5  0  0  0  0  0  0  3  0  6 9991  698 3712  0  1 99
1 0 0 792240 1351028 0   5  0  0  0  0  0  0 119 0 104 11375 508 5553 0  2 98
0 0 0 781640 1340428 0   5  0  0  0  0  0  0  9  0  8 10848 516 5536  0  1 99
0 0 0 777912 1336740 0   5  0  0  0  0  0  0 48  0 25 9670  485 2456  0  1 99
1 0 0 777240 1336276 0   5  0  0  0  0  0  0  0  0  3 9194  484 1017  0  0 100
1 0 0 775412 1334640 0   5  0  0  0  0  0  0 23  0 19 10121 780 3376  0  1 99

Still, I don't see a bottleneck.

The disks look almost idle:

root@nx:/export/home/admin# zpool iostat 1
capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
BIG         1.43T  5.82T    151    355  1.17M  2.34M
TB           385G   543G      3     18   409K   536K
syspool     14.4G  53.8G      0      6  40.8K  57.4K
----------  -----  -----  -----  -----  -----  -----
BIG         1.43T  5.82T      5      0  47.5K      0
TB           385G   543G      0      0      0      0
syspool     14.4G  53.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
BIG         1.43T  5.82T     33      0   269K      0
TB           385G   543G      0      0      0      0
syspool     14.4G  53.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
BIG         1.43T  5.82T     77      0   617K      0
TB           385G   543G      0      0   127K      0
syspool     14.4G  53.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
BIG         1.43T  5.82T     40      0   324K      0
TB           385G   543G      0      0      0      0
syspool     14.4G  53.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
BIG         1.43T  5.82T     27      0   221K      0
TB           385G   543G      0      0      0      0
syspool     14.4G  53.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
BIG         1.43T  5.82T     12      0   103K      0
TB           385G   543G      0      0      0      0
syspool     14.4G  53.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
BIG         1.43T  5.82T     17      0   142K      0
TB           385G   543G      0      0      0      0
syspool     14.4G  53.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
BIG         1.43T  5.82T     23      0   190K      0
TB           385G   543G      0      0      0      0
syspool     14.4G  53.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
BIG         1.43T  5.82T    342      0  2.67M      0
TB           385G   543G     39      0  4.94M      0
syspool     14.4G  53.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----

The vdev layout is striped mirrors, so there is no raidz pool in use.

RE: How to troubleshoot performance - Added by Linda Kateley over 2 years ago

So what we can see in the vmstat is that the CPU is mostly idle; the last column shows mostly 97-99% idle. There is no memory pressure either: sr shows whether the page daemon has started scanning for pages to free, and it stays at zero. The one thing I see that's a little high is interrupts; that might be something worth looking at...
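If you want to dig into where the interrupts are going, intrstat breaks the interrupt load out per device and per CPU; something like this, alongside mpstat, will show whether any one CPU is being eaten by interrupt handling:

intrstat 5
mpstat 5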

In iostat I like to see the writes distributed among the disks. I probably should have specified iostat -xpn, which shows more detail. I am not seeing a lot of traffic to the disks, just spikes at the beginning and the end. You know the disks can take a lot more than the ~2.5 MB/s you can see them doing here.

So we know it is something else.

Are these SAS drives or SATA? If they are SAS, we should be able to increase the queue depth by adding this line to /etc/system and rebooting:

set zfs:zfs_vdev_max_pending = 10
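You can check what the running kernel is using, and change it on the fly to experiment before committing it to /etc/system, with mdb (the 0t prefix means decimal):

echo zfs_vdev_max_pending/D | mdb -k
echo zfs_vdev_max_pending/W0t10 | mdb -kw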

RE: How to troubleshoot performance - Added by Jeff Gibson over 2 years ago

I might be misinterpreting the iostat -C portion, but doesn't it show 144 %b on the c0 controller that is servicing the SATA disks?

I also don't see any log devices. Depending on how your iSCSI/COMSTAR writeback cache is configured, you may be running into the maximum throughput of the ZIL being written to your pool.
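You can check how each LU is configured with stmfadm; look for the "Writeback Cache Disabled" line in the output of:

stmfadm list-lu -v

With the writeback cache disabled, every write from the initiators is treated as synchronous, and without a log device all of that funnels through ZIL blocks allocated on the main pool.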

RE: How to troubleshoot performance - Added by Peter Braun over 2 years ago

The drives are SATA: enterprise WD RE4, 7200 RPM.

The zfs_vdev_max_pending option is already at 10.

We don't have a log device now. We used to have a mirrored pair of Intel X25-Es, but there was no difference, or it was even worse.

Here comes iostat -xpn:

root@nx:/export/home/admin# iostat -xpn
extended device statistics
r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
34.6    4.4  419.1  362.9  0.0  0.0    0.4    0.4   0   1 c5d0
8.3    0.9   93.0   64.2  0.0  0.0    0.1    0.3   0   0 c5d1
34.6    4.4  418.5  362.7  0.0  0.0    0.4    0.4   0   1 c4d0
8.3    0.9   93.0   64.3  0.0  0.0    0.1    0.3   0   0 c4d1
8.7    8.9  904.0  535.9  0.0  0.1    0.0    6.5   0   7 c0t5000C5001065807Bd0
8.7    8.9  904.0  535.9  0.0  0.1    0.0    6.5   0   7 c0t5000C5001065807Bd0s0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t5000C5001065807Bd0s8
0.0    0.0    0.0    0.0  0.0  0.0    0.0    1.4   0   0 c0t5000C5001065807Bd0p0
0.0    0.0    0.0    0.0  0.0  0.0    0.0   19.1   0   0 sd4,h
8.6    9.0  888.8  535.9  0.0  0.1    0.0    6.0   0   6 c0t5000C500141090AFd0
8.6    9.0  888.8  535.9  0.0  0.1    0.0    6.0   0   6 c0t5000C500141090AFd0s0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t5000C500141090AFd0s8
0.0    0.0    0.0    0.0  0.0  0.0    0.0   11.3   0   0 c0t5000C500141090AFd0p0
0.0    0.0    0.0    0.0  0.0  0.0    0.0   10.2   0   0 sd5,h
123.7   14.6 1468.0  779.4  0.0  0.3    0.0    2.0   0  17 c0t50014EE057ADBFC5d0
123.7   14.6 1468.0  779.4  0.0  0.3    0.0    2.0   0  17 c0t50014EE057ADBFC5d0s0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t50014EE057ADBFC5d0s8
0.0    0.0    0.0    0.0  0.0  0.0    0.0    1.1   0   0 c0t50014EE057ADBFC5d0p0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    8.8   0   0 sd6,h
123.5   14.6 1464.2  779.3  0.0  0.3    0.0    2.0   0  16 c0t50014EE0AD036CD7d0
123.5   14.6 1464.2  779.3  0.0  0.3    0.0    2.0   0  16 c0t50014EE0AD036CD7d0s0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t50014EE0AD036CD7d0s8
0.0    0.0    0.0    0.0  0.0  0.0    0.0    8.1   0   0 c0t50014EE0AD036CD7d0p0
0.0    0.0    0.0    0.0  0.0  0.0    0.0   15.1   0   0 sd7,h
123.6   14.6 1469.4  779.4  0.0  0.3    0.0    2.0   0  17 c0t50014EE002587CC2d0
123.6   14.6 1469.4  779.4  0.0  0.3    0.0    2.0   0  17 c0t50014EE002587CC2d0s0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t50014EE002587CC2d0s8
0.0    0.0    0.0    0.0  0.0  0.0    0.0    1.2   0   0 c0t50014EE002587CC2d0p0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    7.3   0   0 sd8,h
124.2   14.5 1488.6  779.7  0.0  0.3    0.0    2.0   0  17 c0t50014EE0AD037437d0
124.2   14.5 1488.5  779.7  0.0  0.3    0.0    2.0   0  17 c0t50014EE0AD037437d0s0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t50014EE0AD037437d0s8
0.0    0.0    0.0    0.0  0.0  0.0    0.0    5.9   0   0 c0t50014EE0AD037437d0p0
0.0    0.0    0.0    0.0  0.0  0.0    0.0   11.3   0   0 sd9,h
114.7   13.4 1362.9  718.1  0.0  0.3    0.0    2.0   0  15 sd10
114.7   13.4 1362.9  718.1  0.0  0.3    0.0    2.0   0  15 sd10,a
0.0    0.0    0.0    0.0  0.0  0.0    0.0    7.9   0   0 sd10,h
0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 sd10,i
0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.2   0   0 sd10,q
123.5   14.6 1464.1  779.4  0.0  0.3    0.0    2.0   0  17 c0t50014EE057ADFADCd0
123.5   14.6 1464.1  779.4  0.0  0.3    0.0    2.0   0  17 c0t50014EE057ADFADCd0s0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t50014EE057ADFADCd0s8
0.0    0.0    0.0    0.0  0.0  0.0    0.0    1.1   0   0 c0t50014EE057ADFADCd0p0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    7.5   0   0 sd12,h
123.7   14.6 1469.2  779.4  0.0  0.3    0.0    2.0   0  17 c0t50014EE057ADCA40d0
123.7   14.6 1469.2  779.4  0.0  0.3    0.0    2.0   0  17 c0t50014EE057ADCA40d0s0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t50014EE057ADCA40d0s8
0.0    0.0    0.0    0.0  0.0  0.0    0.0    1.3   0   0 c0t50014EE057ADCA40d0p0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    9.2   0   0 sd13,h
2.3    6.6  125.0   57.5  0.0  0.0    0.0    2.1   0   1 c0t5000C500125559D3d0
2.3    6.6  125.0   57.5  0.0  0.0    0.0    2.1   0   1 c0t5000C500125559D3d0s0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    8.2   0   0 c0t5000C500125559D3d0s2
0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.4   0   0 c0t5000C500125559D3d0s3
0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t5000C500125559D3d0s8
0.0    0.0    0.0    0.0  0.0  0.0    0.0    9.3   0   0 c0t5000C500125559D3d0p0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    8.5   0   0 c0t5000C500125559D3d0p1
2.1    6.6  116.0   57.5  0.0  0.0    0.0    2.2   0   1 c0t5000C500125523A3d0
2.1    6.6  116.0   57.5  0.0  0.0    0.0    2.2   0   1 c0t5000C500125523A3d0s0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    7.9   0   0 c0t5000C500125523A3d0s2
0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.9   0   0 c0t5000C500125523A3d0s3
0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t5000C500125523A3d0s8
0.0    0.0    0.0    0.0  0.0  0.0    0.0    9.2   0   0 c0t5000C500125523A3d0p0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    8.3   0   0 c0t5000C500125523A3d0p1
123.6   14.6 1464.5  779.3  0.0  0.3    0.0    2.0   0  16 c0t50014EE002B35BE0d0
123.6   14.6 1464.5  779.3  0.0  0.3    0.0    2.0   0  16 c0t50014EE002B35BE0d0s0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t50014EE002B35BE0d0s8
0.0    0.0    0.0    0.0  0.0  0.0    0.0    1.1   0   0 c0t50014EE002B35BE0d0p0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    9.1   0   0 sd18,h
108.5   25.7 1282.4  943.2  0.0  0.2    0.0    1.8   0  17 c0t50014EE0AD08B9C5d0
108.5   25.7 1282.9  943.6  0.0  0.2    0.0    1.8   0  17 c0t50014EE0AD08B9C5d0s0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    1.3   0   0 c0t50014EE0AD08B9C5d0s2
0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.8   0   0 c0t50014EE0AD08B9C5d0s8
0.0    0.0    0.0    0.0  0.0  0.0    0.0    5.8   0   0 c0t50014EE0AD08B9C5d0p0
0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 sd19,h

Regarding writeback / flush to disk: we have the Nexenta default. Even though we are on a UPS, we don't want to risk data.

During the workload we see this most of the time:

r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
1981.7    5.5 22832.2    4.0  0.0  2.1    0.0    1.1   0 171   0   0   0   0  c0

But when the disk flush occurs, which I think is every 15 s:

r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
24.0 1384.2  255.9 59703.3  0.0  1.3    0.0    1.0   0  91   0  0   0   0  c0

How should we interpret %b, %w and kr/s, kw/s?

RE: How to troubleshoot performance - Added by Peter Braun over 2 years ago

Thanks for the article. When we see %b at 141, is that OK or not?

Another thing we see in the system is:

Apr 7 11:02:43 nx unix: [ID 954099 kern.info] NOTICE: IRQ19 is being shared by drivers with different interrupt levels.
Apr 7 11:02:43 nx This may result in reduced system performance.
Apr 7 11:02:43 nx unix: [ID 954099 kern.info] NOTICE: IRQ19 is being shared by drivers with different interrupt levels.
Apr 7 11:02:43 nx This may result in reduced system performance.
Apr 7 11:02:50 nx unix: [ID 954099 kern.info] NOTICE: IRQ19 is being shared by drivers with different interrupt levels.
Apr 7 11:02:50 nx This may result in reduced system performance.
Apr 7 11:02:50 nx unix: [ID 954099 kern.info] NOTICE: IRQ19 is being shared by drivers with different interrupt levels.
Apr 7 11:02:50 nx This may result in reduced system performance.
Apr 7 11:07:52 nx unix: [ID 954099 kern.info] NOTICE: IRQ19 is being shared by drivers with different interrupt levels.
Apr 7 11:07:52 nx This may result in reduced system performance.
Apr 7 11:07:52 nx unix: [ID 954099 kern.info] NOTICE: IRQ19 is being shared by drivers with different interrupt levels.
Apr 7 11:07:52 nx This may result in reduced system performance.

Any idea how to troubleshoot this message? Maybe this could be the cause of the slow performance?

RE: How to troubleshoot performance - Added by Craig Herring over 2 years ago

We are seeing these messages as well. What info is needed to troubleshoot this issue?

RE: How to troubleshoot performance - Added by Peter Braun over 2 years ago

We tried "echo ::interrupts | mdb -k" to print interrupts usage on the system.

IRQ  Vect IPL Bus    Trg Type   CPU Share APIC/INT# ISR(s)
3    0xb1 12  ISA    Edg Fixed  3   1     0x0/0x3   asyintr
4    0xb0 12  ISA    Edg Fixed  2   1     0x0/0x4   asyintr
9    0x81 9   PCI    Lvl Fixed  1   1     0x0/0x9   acpi_wrapper_isr
11   0xd1 14  PCI    Lvl Fixed  2   1     0x0/0xb   hpet_isr
16   0x86 9   PCI    Lvl Fixed  1   1     0x0/0x10  uhci_intr
18   0x84 9   PCI    Lvl Fixed  7   2     0x0/0x12  uhci_intr, ehci_intr
19   0x88 9   PCI    Lvl Fixed  3   4     0x0/0x13  0, ata_intr, ata_intr, uhci_intr
21   0x87 9   PCI    Lvl Fixed  2   1     0x0/0x15  uhci_intr
23   0x85 9   PCI    Lvl Fixed  0   2     0x0/0x17  uhci_intr, ehci_intr
24   0x82 7   PCI    Edg MSI    3   1     -         pcieb_intr_handler
25   0x40 5   PCI    Edg MSI    4   1     -         mptsas_intr
26   0x30 4   PCI    Edg MSI    5   1     -         pcieb_intr_handler
27   0x83 7   PCI    Edg MSI    6   1     -         pcieb_intr_handler
28   0x60 6   PCI    Edg MSI-X  3   1     -         igb_intr_tx_other
29   0x61 6   PCI    Edg MSI-X  5   1     -         igb_intr_rx
30   0x62 6   PCI    Edg MSI-X  6   1     -         ixgbe_intr_msix
31   0x63 6   PCI    Edg MSI-X  7   1     -         ixgbe_intr_msix
32   0x20 2          Edg IPI    all 1     -         cmi_cmci_trap
33   0x64 6   PCI    Edg MSI-X  0   1     -         ixgbe_intr_msix
34   0x65 6   PCI    Edg MSI-X  1   1     -         ixgbe_intr_msix
35   0x66 6   PCI    Edg MSI-X  2   1     -         ixgbe_intr_msix
36   0x67 6   PCI    Edg MSI-X  3   1     -         ixgbe_intr_msix
37   0x68 6   PCI    Edg MSI-X  4   1     -         ixgbe_intr_msix
38   0x69 6   PCI    Edg MSI-X  5   1     -         ixgbe_intr_msix
39   0x6a 6   PCI    Edg MSI-X  6   1     -         ixgbe_intr_msix
40   0x6b 6   PCI    Edg MSI-X  7   1     -         ixgbe_intr_msix
41   0x6c 6   PCI    Edg MSI-X  0   1     -         ixgbe_intr_msix
42   0x6d 6   PCI    Edg MSI-X  1   1     -         ixgbe_intr_msix
43   0x6e 6   PCI    Edg MSI-X  2   1     -         ixgbe_intr_msix
44   0x6f 6   PCI    Edg MSI-X  3   1     -         ixgbe_intr_msix
45   0x70 6   PCI    Edg MSI-X  4   1     -         ixgbe_intr_msix
46   0x71 6   PCI    Edg MSI-X  5   1     -         ixgbe_intr_msix
47   0x72 6   PCI    Edg MSI-X  3   1     -         igb_intr_tx_other
48   0x73 6   PCI    Edg MSI-X  7   1     -         igb_intr_rx
160  0xa0 0          Edg IPI    all 0     -         poke_cpu
208  0xd0 14         Edg IPI    all 1     -         kcpc_hw_overflow_intr
209  0xd3 14         Edg IPI    all 1     -         cbe_fire
210  0xd4 14         Edg IPI    all 1     -         cbe_fire
240  0xe0 15         Edg IPI    all 1     -         xc_serv
241  0xe1 15         Edg IPI    all 1     -         apic_error_intr

Do you see anything suspicious?
