Hi Openswan community,<br><br><div style="margin-left: 40px;">We develop software that exposes some essential services to remotely deployed client applications. It will run on a machine with server-grade hardware: a quad-core processor, 6 GB of RAM, a Gigabit Ethernet card, and so on. Its primary job is to serve several hundred client machines; occasionally, in some deployments, the number of clients can grow as high as 5-6 thousand. On each client machine, nearly a dozen different applications communicate with the various services exposed by the server. All communication between the server and the clients goes over SSL. One point worth noting is that each individual application talks to the server only occasionally, but taken together the applications on one client generate on the order of a dozen requests per minute. So the server faces substantial aggregate traffic: 1000 (clients) × 12 = 12,000 requests per minute, and several times that in the largest deployments.<br>
<br>Until now we have been using plain SSL connections between each application on the client machine and the services in the server software. Some members of the project team have suggested moving to Openswan's IPsec tunnelling so that we can consolidate the SSL negotiation overhead of the various applications running on the client box.<br>
<br>I would like to clarify a few questions/doubts before we embark on this.<br><br>Here is the model we plan to set up between the server and the various client machines:<br>
<ul>
<li>Establish a host-to-host IPsec tunnel between the server and each client machine.</li><li>Remove the SSL configuration from both the server software and the client applications, and let the endpoints of each tunnel take care of encrypting and decrypting the data.</li>
</ul>Here are my questions/doubts regarding the scalability and performance of this setup:<br><ul><li>First of all, we plan to use CentOS 6; is the kernel's built-in IPsec support sufficient to establish tunnels between machines, or do we still need Openswan on top of it?</li>
<li>Can the IPsec software scale to the few thousand tunnels we would need on the server machine with the hardware configuration mentioned above?</li><li>What will the CPU and memory consumption be with that many tunnels open on the server?</li>
<li>It is said that a web server scales far better with ordinary short-lived HTTPS requests than with keep-alive connections; wouldn't the same reasoning apply here, i.e. wouldn't having each application open a short-lived SSL connection and close it when done scale much better than keeping many thousands of dedicated tunnels up, even if mostly idle?</li>
<li>We would also want the server to do some essential work besides managing all this tunnelling; can the design still scale and perform reasonably?<br></li></ul>Sorry for bombarding you with so many questions in my first mail to the list, but I hope the folks here will understand my confusion about which model is better: many short SSL request-response cycles, or a dedicated IPsec tunnel per client serving the same request-response traffic (with the applications unaware of the encryption).<br>
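<br>For concreteness, here is roughly the kind of per-client connection definition we are imagining on the Openswan side. This is only a minimal sketch with placeholder addresses; the choices of authby and auto below are assumptions on my part, not decisions:<br>

```
# /etc/ipsec.conf (sketch only; addresses are placeholders)
config setup
        protostack=netkey       # use the native NETKEY IPsec stack in the CentOS 6 kernel

conn server-to-client-01
        left=192.0.2.10         # server address (placeholder)
        right=198.51.100.21     # one client's address (placeholder)
        type=tunnel             # plain host-to-host, no subnets behind either end
        authby=secret           # PSK just for the sketch; certificates may suit us better
        auto=start              # bring the tunnel up when the ipsec service starts
```

The matching pre-shared key would go in /etc/ipsec.secrets. With thousands of clients we would presumably generate one such conn block per client, or replace them all with a single conn using right=%any and certificate authentication.<br>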
<br></div>Thanks and Regards,<br>Samba<br>