<br>Hi list,<br><br>Unfortunately, we are no longer able to reproduce the bug. It seems we made no change other than switching PFS off on both sides. <br>Switching PFS back on on both sides does not reproduce the bug, but I can't guarantee that our partner hasn't modified something else in his configuration.
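<br><br>For context, switching PFS off on one side is a one-line change in ipsec.conf. A minimal sketch (the connection name and addresses are invented; only pfs=no is the setting under discussion):<br><br>

```ini
conn faulty_connection
        # hypothetical endpoints, for illustration only
        left=192.0.2.1
        right=203.0.113.2
        authby=secret
        # disable Perfect Forward Secrecy for phase 2
        pfs=no
        auto=start
```

Both ends must agree: with pfs=no the responder simply does not require a fresh DH exchange in quick mode, which is why the mismatched-group problem below disappears.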
<br><br>For information, Matthias's patch was:<br><br>--- openswan-2.4.7/programs/pluto/demux.c Fri Jan 12 11:35:21 2007<br>+++ openswan-2.4.7-debug/programs/pluto/demux.c Fri Jan 12 12:16:07 2007<br>@@ -2411,7 +2411,7 @@<br> * we can only be in calculating state if state is ignore,<br> * or suspended.<br> */<br>- passert(result == STF_IGNORE || result == STF_SUSPEND || st->st_calculating==FALSE);<br>+ passert(result == STF_INLINE || result == STF_IGNORE || result == STF_SUSPEND || st->st_calculating==FALSE);<br><br>We have applied this patch on our production server and will monitor how it behaves.
<br><br>Thanks,<br>Jean-Michel.<br><br><br><div><span class="gmail_quote">2007/3/5, Paul Wouters <<a href="mailto:paul@xelerance.com" target="_blank">
paul@xelerance.com</a>>:</span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
On Mon, 5 Mar 2007, Pompon wrote:<br><br>> Subject: Re: [Openswan Users] Pluto Segmentation fault in 2.4.7<br>><br>> Following our last week experiment with Segmentation fault, we moved the<br>> connection to a staging platform, and we've got a backtrace of the bug :
<br><br>> #0 fmt_log (buf=0xbfe61780 "", buf_len=1024, fmt=0x80bdf20 "ASSERTION<br>> FAILED at %s:%lu: %s", ap=0x0) at log.c:149<br>> 149 snprintf(bp, be - bp, "\"%s\"", c->name);
<br>><br>> (gdb) bt<br>> #0 fmt_log (buf=0xbfe61780 "", buf_len=1024, fmt=0x80bdf20 "ASSERTION<br>> FAILED at %s:%lu: %s", ap=0x0) at log.c:149<br>> #1 0x08056cd1 in openswan_loglog (mess_no=135371408, message=0x8119a90
<br><br>The logging routines in 2.4.7 had a problem with displaying certain<br>options. Can you verify that openswan 2.4.8rc1 (available in the testing/<br>subdirectory) still has this problem? It looks like a different bug, but
<br>before we start hunting ghosts, I'd like to be sure.<br><br>> "Hy\227\021\b", '' <repeats 192 times>...) at log.c:434<br>> #2 0x08057c30 in passert_fail (pred_str=0x8119a90 "Hy\227\021\b", ''
<br>> <repeats 192 times>...,<br>> file_str=0x8119a90 "Hy\227\021\b", '' <repeats 192 times>...,<br>> line_no=135371408) at log.c:606<br>> #3 0x0807b3c4 in complete_state_transition (mdp=0x80edd6c,
<br>> result=STF_INLINE) at demux.c:2738<br>> #4 0x080795f5 in process_packet (mdp=0x80edd6c) at demux.c:2352<br>> #5 0x0807bcbc in comm_handle (ifp=0x8116500) at demux.c:1223<br>> #6 0x0805d070 in call_server () at
server.c:1166<br>> #7 0x0805b0d1 in main (argc=12, argv=0xbfe62264) at plutomain.c:787<br>><br>><br>> The PFS is declared on both side, and our partner use a linksys VPN (and not<br>> openswan as I first thought). Applying the Matthias's patch make pluto not
<br>> crashing even if the tunnels don't go up. In this case, here is the related<br>> auth-log :<br><br>What was Matthias's patch?<br><br>> Mar 5 10:12:52 tamerlane pluto[12459]: "faulty_connection" #10: responding
<br>> to Main Mode<br>> Mar 5 10:12:52 tamerlane pluto[12459]: "faulty_connection" #10:<br>> OAKLEY_DES_CBC is not supported. Attribute OAKLEY_ENCRYPTION_ALGORITHM<br>> Mar 5 10:12:52 tamerlane pluto[12459]: "faulty_connection" #10:
<br>> OAKLEY_DES_CBC is not supported. Attribute OAKLEY_ENCRYPTION_ALGORITHM<br>> Mar 5 10:12:52 tamerlane pluto[12459]: "faulty_connection" #10: transition<br>> from state STATE_MAIN_R0 to state STATE_MAIN_R1
<br>> Mar 5 10:12:52 tamerlane pluto[12459]: "faulty_connection" #10:<br>> STATE_MAIN_R1: sent MR1, expecting MI2<br>> Mar 5 10:12:55 tamerlane pluto[12459]: "faulty_connection" #10: transition
<br>> from state STATE_MAIN_R1 to state STATE_MAIN_R2<br>> Mar 5 10:12:55 tamerlane pluto[12459]: "faulty_connection" #10:<br>> STATE_MAIN_R2: sent MR2, expecting MI3<br>> Mar 5 10:13:10 tamerlane pluto[12459]: "faulty_connection" #10: Main mode
<br>> peer ID is ID_IPV4_ADDR: '222.126.123.139'<br>> Mar 5 10:13:10 tamerlane pluto[12459]: "faulty_connection" #10: I did not
<br>> send a certificate because I do not have one.
<br>> Mar 5 10:13:10 tamerlane pluto[12459]: "faulty_connection" #10: transition<br>> from state STATE_MAIN_R2 to state STATE_MAIN_R3<br>> Mar 5 10:13:10 tamerlane pluto[12459]: "faulty_connection" #10:
<br>> STATE_MAIN_R3: sent MR3, ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY<br>> cipher=oakley_3des_cbc_192 prf=oakley_sha group=modp1024}<br>> Mar 5 10:13:10 tamerlane pluto[12459]: "faulty_connection" #10: Dead Peer
<br>> Detection (RFC 3706): not enabled because peer did not advertise it<br>> Mar 5 10:13:11 tamerlane pluto[12459]: "faulty_connection" #11: only<br>> OAKLEY_GROUP_MODP1024 and OAKLEY_GROUP_MODP1536 supported for PFS
<br>> Mar 5 10:13:11 tamerlane pluto[12459]: "faulty_connection" #11: sending<br>> encrypted notification BAD_PROPOSAL_SYNTAX to 222.126.123.139:500<br><br>The other end is configured with 1des/3des and DH group1 (modp768), which are too
<br>weak and not supported by default in openswan. But of course, we shouldn't<br>crash on this.<br><br>> Switching off the PFS on both side make the tunnel works using either the<br>> patched or not patched pluto binary.
<br><br>Yes, then you are not stuck on the remote's flawed proposal of modp768.<br><br>I guess we didn't catch this because our automatic testing all uses the same<br>pluto binary, so either both ends support it or both ends do not.
<br><br>Thanks for the report,<br><br>Paul<br></blockquote></div><br>