[Openswan Users] RHEL4 pluto segmentation faults and restarts

John Mravunac JohnMravunac at citect.com
Thu Apr 14 23:59:15 CEST 2005


Hi,
 
After experiencing many problems with running Openswan 2.3.0 on RHEL3, I
decided to give RHEL4 a try. I compiled and installed both the userland
and the KLIPS kernel module, and everything appeared to start fine. But
then I noticed that no tunnels were up and pluto kept restarting after a
segmentation fault:
 
 
Apr 15 00:47:20 one kernel: Unable to handle kernel NULL pointer dereference at virtual address 00000000
Apr 15 00:47:20 one kernel:  printing eip:
Apr 15 00:47:20 one kernel: f8ef67cd
Apr 15 00:47:20 one kernel: *pde = 3d31a067
Apr 15 00:47:20 one kernel: Oops: 0002 [#1]
Apr 15 00:47:20 one kernel: Modules linked in: ipsec(U) md5 ipv6 parport_pc lp parport autofs4 i2c_dev i2c_core sunrpc button battery ac uhci_hcd ehci_hcd hw_random e1000 e100 mii tg3 floppy dm_snapshot dm_zero dm_mirror ext3 jbd dm_mod cciss sd_mod scsi_mod
Apr 15 00:47:20 one kernel: CPU:    0
Apr 15 00:47:20 one kernel: EIP:    0060:[<f8ef67cd>]    Not tainted VLI
Apr 15 00:47:20 one kernel: EFLAGS: 00010202   (2.6.9-5.0.3.EL)
Apr 15 00:47:20 one ipsec__plutorun: /usr/local/lib/ipsec/_plutorun: line 221:  3185 Segmentation fault      /usr/local/libexec/ipsec/pluto --nofork --secretsfile /etc/ipsec.secrets --ipsecdir /etc/ipsec.d --debug-none --uniqueids
Apr 15 00:47:20 one kernel: EIP is at aes_32+0x3/0x499 [ipsec]
Apr 15 00:47:20 one ipsec__plutorun: !pluto failure!:  exited with error status 139 (signal 11)
Apr 15 00:47:20 one kernel: eax: f7936800   ebx: 00000000   ecx: 00000004   edx: 00000000
Apr 15 00:47:20 one ipsec__plutorun: restarting IPsec after pause...

 
I've tried bringing up the tunnels without the KLIPS kernel module and
they appear to work fine, but I really do want all IPsec traffic to
pass through an ipsec0 interface.
 
If anybody has any suggestions as to how I can fix the pluto problem, or
how to set up the iptables rules when the KLIPS module is not used, I'd
be extremely appreciative!
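To be clear about the iptables part, something along these lines is what I mean. The subnet and interface names below are made up; as I understand it, without KLIPS there is no ipsec0 interface to match on, so the decrypted packets would have to be filtered by address instead:

```shell
# Hypothetical rules for the no-KLIPS (NETKEY) case; 10.1.0.0/16 is an
# assumed remote tunnel subnet and eth0 an assumed external interface.
# Substitute your own values.

# Let IKE negotiation and the ESP/AH protocols themselves through.
iptables -A INPUT -i eth0 -p udp --dport 500 -j ACCEPT
iptables -A INPUT -i eth0 -p 50 -j ACCEPT
iptables -A INPUT -i eth0 -p 51 -j ACCEPT

# Decrypted packets re-enter on eth0, not on a dedicated ipsec0, so
# filter the tunneled traffic by its addresses rather than by interface.
iptables -A FORWARD -s 10.1.0.0/16 -j ACCEPT
iptables -A FORWARD -d 10.1.0.0/16 -j ACCEPT
```

Is that roughly the right approach, or is there a cleaner way to distinguish decrypted traffic?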
 
Regards,
John Mravunac
 
 
 
 

