<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
</head>
<body text="#000000" bgcolor="#FFFFFF">
Hi, <br>
Can tdb_lock be converted from spinlock to rwlock?<br>
<br>
From what I know, tdb_lock is used to protect the SA. <br>
ipsec_xmit.c and ipsec_rcv.c only take it for reading (getting) the SA. <br>
<br>
<br>
While experimenting with KLIPS, I converted this tdb_lock to
rwlock_t and performed the test given below. <br>
<br>
I got the desired results, but I am not sure about any side effects
of this conversion. Please provide your input.<br>
<br>
<i><br>
Test setup details:<br>
LAN1 ----------<br>
Test Machine ----------(IPsec tunnel)---------------- Road warrior <br>
LAN2 ----------<br>
<br>
<br>
The Test Machine has 8 cores; I have bound interrupts to only 3 of them: <br>
LAN1 -- cpu1 <br>
LAN2 -- cpu2<br>
Road warrior -- cpu3<br>
<br>
Traffic flows from LAN1 and LAN2 to the Road warrior. (I used
iperf to generate traffic: the iperf server is at the Road warrior and the
clients are in LAN1 and LAN2.)<br>
<br>
<br>
Changes done:<br>
1) In ipsec_tunnel.c:<br>
     dev->tx_queue_len = 0; /* No qdisc */ (earlier it was 10)<br>
     dev->features |= NETIF_F_LLTX; /* No tx lock */<br>
<br>
2) Converted tdb_lock from spinlock_t to rwlock_t.<br>
     Instead of spin_lock_bh, used<br>
     read_lock_bh in ipsec_xmit.c and ipsec_rcv.c,<br>
     write_lock_bh elsewhere.<br>
     (A rough sketch of this change is given below, after the results.)<br>
Got throughput:<br>
     Before change: 225 Mbps (cpu1 and cpu2 are not 100% utilized) <br>
     After change: 350 Mbps (both cpu1 and cpu2 are 100% utilized)<br>
</i><br>
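For reference, here is a minimal sketch of the locking change from 2) above. It is
not the actual KLIPS source: the two helper functions and the declaration site are
only illustrative, and only the lock type and the read/write-lock calls reflect the
change I made.<br>
<pre>
/*
 * Sketch only -- not the actual KLIPS code.  The helper names
 * (sa_lookup_example, sa_add_example) are hypothetical; the point is
 * the lock type and which paths take it for read vs. write.
 */
#include &lt;linux/spinlock.h&gt;

/* Was:  spinlock_t tdb_lock;  -- one global lock protecting the SAs. */
DEFINE_RWLOCK(tdb_lock);

/* Data path (ipsec_xmit.c / ipsec_rcv.c): the SA is only read, so
 * several CPUs may hold the lock concurrently. */
static void sa_lookup_example(void)
{
        read_lock_bh(&amp;tdb_lock);        /* was spin_lock_bh(&amp;tdb_lock)   */
        /* ... look up the SA and read its fields ... */
        read_unlock_bh(&amp;tdb_lock);      /* was spin_unlock_bh(&amp;tdb_lock) */
}

/* Control path (pfkey SA add/update/delete): the SA set is modified,
 * so the lock is taken exclusively. */
static void sa_add_example(void)
{
        write_lock_bh(&amp;tdb_lock);       /* was spin_lock_bh(&amp;tdb_lock)   */
        /* ... insert, update or remove the SA ... */
        write_unlock_bh(&amp;tdb_lock);     /* was spin_unlock_bh(&amp;tdb_lock) */
}
</pre>
With this, two CPUs can both be inside the receive/transmit path at the same time;
only SA add/update/delete still excludes everyone else.<br>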
<br>
Also, one more question: is there any known side effect of
dev->tx_queue_len = 0? Why is it 10 by default? <br>
<br>
Regards,<br>
Jagdish Motwani <br>
Software Engineer<br>
Elitecore Technologies Pvt. Ltd.
</body>
</html>