[Openswan Users] Scalability and performance of Openswan tunnels compared to individual SSL request/response cycles

Samba saasira at gmail.com
Wed Aug 24 08:17:02 EDT 2011


Hi Openswan community,

We are developing a piece of software that exposes some essential services to
remotely deployed client applications. It will run on a machine with
server-grade hardware: a quad-core processor, 6 GB of RAM, a Gigabit Ethernet
card, and so on. The primary job of this software is to serve several hundred
client machines; occasionally, in some deployments, the number of clients can
grow as high as 5,000-6,000. On each client machine we will have nearly a
dozen different applications communicating with the various services exposed
by the server software. All communication between the server and the client
machines is carried over SSL. One point worth noting is that no single
application talks to the server very often, but taken together the
applications on each client generate on the order of a dozen requests per
minute. So with 1,000 clients the server faces a substantial aggregate load:
1,000 clients * 12 requests = 12,000 requests per minute, or roughly 200
requests per second.

Until now we have been using plain SSL connections between each application
on the client machine and the services running in the server software. Some
members of the project team have suggested moving to Openswan (IPsec)
tunnelling so that we can consolidate the SSL negotiation overhead of the
various applications running on the client box.

I would like to get a few questions cleared up before we embark on this.

Here are the details of the model we plan to set up between the server and
the various client systems:

   - Establish a host-to-host tunnel between the server and each client
   machine (sketched below).
   - Remove the SSL configuration from both the server software and the
   client applications, and let the endpoints of each tunnel take care of
   encrypting and decrypting the data.
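
To make this concrete, below is the sort of per-client connection definition
we are imagining in ipsec.conf. This is only a sketch: the names and
addresses are made-up placeholders, and at a few thousand clients we would
presumably use a single conn with right=%any instead of one stanza per
client.

conn server-to-client1
        # the server side ("left") -- placeholder address
        left=192.0.2.10
        # one client machine ("right") -- placeholder address;
        # for thousands of clients a single conn with right=%any
        # and auto=add would avoid per-client stanzas
        right=198.51.100.21
        # host-to-host IPsec, no subnets behind either endpoint
        type=tunnel
        # pre-shared key just for illustration; certificates would
        # probably be more manageable at this scale
        authby=secret
        # establish the tunnel when pluto starts
        auto=start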

Here are my questions/doubts regarding the scalability/performance of this
setup:

   - First of all, we plan to use CentOS 6. Is the kernel's built-in IPsec
   support in CentOS 6 sufficient to establish tunnels between machines, or
   do we still need Openswan on top of it?
   - Can the IPsec software scale far enough to support a few thousand
   tunnels on a server machine with the above-mentioned hardware
   configuration?
   - What will the CPU and memory consumption on the server be with that
   many tunnels established?
   - It is said that a web server scales to many more clients with ordinary
   HTTPS requests than with keep-alive HTTPS connections. Wouldn't the same
   reasoning apply here, i.e. wouldn't having each application open a
   short-lived SSL connection and close it after the request scale much
   better than keeping thousands of dedicated tunnels up, mostly idle?
   - We would also want the server to do some essential work besides
   managing these tunnels, so can this design still scale and perform
   reasonably?

Sorry for bombarding you with so many questions in my first mail to the
list, but I hope folks here will understand my confusion about which model
is better: many short SSL request/response cycles, or dedicated IPsec
tunnels carrying the same number of request/response cycles (with the
applications unaware of the encryption).

Thanks and Regards,
Samba