[Openswan Users] openswan to netscreen, poor throughput, ksoftirqd 100% CPU
kallen at groknaut.net
Mon Feb 14 17:01:40 EST 2011
hi there,
i have a tunnel set up between two datacenters, with openswan on one
end and a netscreen on the other. throughput through the tunnel seems
poor to me. i'd like to poll the list: if you're running a similar
setup, what kind of throughput do you get? i'd also welcome suggestions
on settings i could tune or fix to get better throughput.
when i push traffic through the tunnel with netcat i only get about
40mbit/sec, and while the test runs, ksoftirqd on the linux router
generally pegs at 100% CPU. latency between the datacenters is 6-7ms.
[dc2-db ]$ sudo nc -l 50000 > /dev/null
[dc1-admin ]$ cat /dev/zero | pv | nc db131.sacpa 50000
103MB 0:00:21 [4.84MB/s]
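(the pv reading is in MB/s; converting confirms the ~40mbit figure i quoted:)

```shell
# sanity-check the units: pv reports MB/s, my complaint is in Mbit/s
awk 'BEGIN { printf "%.1f Mbit/s\n", 4.84 * 8 }'
# prints 38.7 Mbit/s
```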
i've seen the ksoftirqd issue mentioned elsewhere. i was hoping that
moving to a newer kernel (2.6.32-5-amd64) and/or adjusting smp
affinity for the multi-queue NICs would help (my smp affinity settings
are shown below). but in the datacenter1-to-datacenter2 throughput
test, ksoftirqd still pegs on the linux router.
any ideas on how i can get better throughput? any ideas on ksoftirqd?
fwiw, i also ran a baseline test to see the best i could expect from
openswan to openswan: i set up a tunnel between our two linux routers,
linux-gw1a and linux-gw1b, which sit on the same VLAN. testing with
iperf and netcat, i got 107mbit/sec.
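(for what it's worth, crypto cost aside, ESP tunnel mode alone shaves a few percent off line rate just in encapsulation overhead. a rough estimate, assuming AES-CBC with HMAC-SHA1-96 and ignoring CBC padding; the numbers are my own back-of-envelope, not measured:)

```shell
# goodput fraction for ESP tunnel mode on a 1500-byte inner packet:
# outer IP (20) + ESP header (8) + IV (16) + pad-len/next-hdr (2) + ICV (12)
awk 'BEGIN {
  mtu  = 1500
  over = 20 + 8 + 16 + 2 + 12
  printf "%.0f%% of line rate\n", 100 * mtu / (mtu + over)
}'
# prints 96% of line rate
```

so encapsulation alone doesn't explain my numbers; the bottleneck looks CPU-side.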
all the details are below.
thanks very much in advance,
kallen
datacenter1 linux router:
* openswan version 2.6.32.
* linux is Debian Squeeze (6.0)
* kernel is 2.6.32-5-amd64
* iptables v1.4.8
datacenter2 netscreen:
* netscreen SSG-550M
* screenos 6.3.0r5.0
the tunnel architecture:
datacenter1                                        datacenter2
openswan gw                                        netscreen
10.8.13.18  5.5.5.22  [==tunnel==]  6.6.6.71  10.1.32.20/19
dc1-admin is behind 10.8.13.18
dc2-db is behind 10.1.32.20/19
[linux-gw1b ]# lspci -v | grep -i eth
02:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
03:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
[linux-gw1b ]# grep eth /proc/interrupts
27: 7115496 22031 21822 21479 PCI-MSI-edge eth0-rx-0
28: 30138 8897368 30295 30517 PCI-MSI-edge eth0-tx-0
29: 56 98 52 71 PCI-MSI-edge eth0
31: 3395871 5238067 37752 37951 PCI-MSI-edge eth1-rx-0
32: 63795 64125 8904078 64103 PCI-MSI-edge eth1-tx-0
33: 3 3 1 2 PCI-MSI-edge eth1
[linux-gw1b ]# for i in 27 28 29 31 32 33; do echo -n "IRQ $i: "; cat /proc/irq/$i/smp_affinity; done
IRQ 27: 1
IRQ 28: 2
IRQ 29: f
IRQ 31: 3
IRQ 32: 4
IRQ 33: f
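one thing i notice in my own listing: eth1-rx-0 (IRQ 31, mask 3) is allowed on CPU0, the same CPU that eth0-rx-0 (IRQ 27, mask 1) is pinned to. for reference, the smp_affinity mask for CPU n is just hex(1 << n); the echo lines in the comment are a hypothetical re-pin (run as root), not what i currently have:

```shell
# smp_affinity is a hex cpu bitmask: the mask for CPU n is 1 << n
for cpu in 0 1 2 3; do
  printf 'CPU%d -> mask %x\n' "$cpu" "$((1 << cpu))"
done
# e.g. to give each rx queue a core of its own (note IRQ 31's mask 3
# above currently overlaps eth0-rx-0 on CPU0):
#   echo 1 > /proc/irq/27/smp_affinity   # eth0-rx-0 -> CPU0
#   echo 4 > /proc/irq/31/smp_affinity   # eth1-rx-0 -> CPU2
```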
[linux-gw1b ]# ipsec verify
Checking your system to see if IPsec got installed and started correctly:
Version check and ipsec on-path [OK]
Linux Openswan U2.6.32/K2.6.32-5-amd64 (netkey)
Checking for IPsec support in kernel [OK]
SAref kernel support [N/A]
NETKEY: Testing for disabled ICMP send_redirects [OK]
NETKEY detected, testing for disabled ICMP accept_redirects [OK]
Checking that pluto is running [OK]
Pluto listening for IKE on udp 500 [OK]
Pluto listening for NAT-T on udp 4500 [FAILED]
Two or more interfaces found, checking IP forwarding [FAILED]
Checking NAT and MASQUERADEing [OK]
Checking for 'ip' command [OK]
Checking /bin/sh is not /bin/dash [WARNING]
Checking for 'iptables' command [OK]
Opportunistic Encryption DNS checks:
Looking for TXT in forward dns zone: linux-gw1b [MISSING]
Does the machine have at least one non-private address? [OK]
Looking for TXT in reverse dns zone: 22.5.5.5.in-addr.arpa. [MISSING]
[linux-gw1b ]# ipsec auto --verbose --status | grep "erouted; eroute owner: " | wc -l
153
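(the 153 isn't arbitrary: openswan expands leftsubnets x rightsubnets into one eroute per pair, and the conn below has 9 left subnets and 17 right subnets:)

```shell
# 9 leftsubnets x 17 rightsubnets = one erouted child SA per pair
echo $((9 * 17))
# prints 153
```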
ipsec.conf:

version 2.0

config setup
        interfaces=%defaultroute
        virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12,%v4:!172.16.3.0/24,%v4:!172.16.4.0/24,%v4:!172.16.5.0/24,%v4:!10.17.0.0/16,%v4:!172.16.15.0/24,%v4:!172.16.62.0/24,%v4:!172.16.129.0/24,%v4:!192.168.64.0/24,%v4:!192.168.100.0/24
        protostack=netkey

conn mytunnel
        type=tunnel
        left=5.5.5.22
        leftnexthop=5.5.5.100
        leftsourceip=10.8.13.18
        leftsubnets={172.16.3.0/24 172.16.4.0/24 172.16.5.0/24 10.17.0.0/16 172.16.15.0/24 172.16.62.0/24 172.16.129.0/24 192.168.64.0/24 192.168.100.0/24}
        right=6.6.6.71
        rightnexthop=6.6.6.66
        rightsubnets={10.1.0.0/16 172.17.0.0/19 192.168.49.0/24 192.168.110.0/24 192.168.111.0/24 192.168.112.0/24 192.168.141.0/24 192.168.130.0/23 192.168.132.0/24 192.168.151.0/24 192.168.43.0/24 192.168.44.0/23 192.168.210.0/23 192.168.212.0/24 192.168.121.0/24 192.168.220.0/23 192.168.222.0/24}
        authby=secret
        auto=add
        pfs=no
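one thing the conn above leaves to defaults is the cipher proposal. if the negotiated ESP transform ends up as 3DES (a common netscreen default), software crypto cost roughly triples versus AES, which would matter at these speeds. a hedged sketch of pinning AES instead, assuming openswan 2.6 syntax and that the SSG side is configured to match:

```
conn mytunnel
        ike=aes128-sha1;modp1536
        phase2alg=aes128-sha1
```

(i haven't confirmed what my tunnel actually negotiated; `ipsec auto --status` shows the in-use algorithms.)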
sysctl tuning:
net.ipv4.conf.tun0.send_redirects = 0
net.ipv4.conf.tun0.accept_redirects = 0
net.ipv4.conf.eth1.send_redirects = 0
net.ipv4.conf.eth1.accept_redirects = 0
net.ipv4.conf.eth0.send_redirects = 0
net.ipv4.conf.eth0.accept_redirects = 0
net.ipv4.conf.lo.send_redirects = 0
net.ipv4.conf.lo.accept_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.tcp_fin_timeout = 30
net.core.netdev_max_backlog = 8192
net.ipv4.tcp_max_orphans = 65536
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 524288
net.core.rmem_max = 1048576
net.core.wmem_max = 1048576
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 43690 4194304
net.core.netdev_budget=1000
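on the netdev_max_backlog / netdev_budget front, the second (hex) field of /proc/net/softnet_stat counts packets dropped at the per-cpu backlog, so it shows whether those two sysctls are actually a bottleneck. a small bash sketch (the function name is mine):

```shell
# print per-cpu backlog drops from a softnet_stat-format file;
# normally run as: softnet_drops /proc/net/softnet_stat
softnet_drops() {
  local i=0 total drop rest
  while read -r total drop rest; do
    # fields are hex; 16#... converts to decimal in bash arithmetic
    printf 'cpu%d dropped: %d\n' "$i" "$((16#$drop))"
    i=$((i + 1))
  done < "$1"
}
```

non-zero and growing values there would argue for raising netdev_max_backlog further.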