[xmlsec] Configuration on 64-bit Linux

Bell, Bill Bill_Bell at mentor.com
Fri Nov 4 12:56:43 PDT 2011


I have a question regarding the configuration of XML Sec on 64-bit Linux.

I have an application that crashes on 64-bit RHEL but does not crash on other platforms (32- and 64-bit Windows, 32-bit RHEL).

Working Environment:
XMLSec - version 1.2.16
OpenSSL - 1.0.0a
OS: RHEL5 - 64-bit
Compiler: GCC 4.1.2

In debugging the problem, I have determined that xmlSecSize is "unsigned int" in the library but "size_t" in my application.
The source of the mismatch is the configure script: it correctly detects that size_t is 8 bytes, but then defines XMLSEC_NO_SIZE_T because the size is not 4.
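For context, the effect of that define, as I read the 1.2.x headers (xmlsec/xmlsec.h; please correct me if this is not the authoritative definition), is roughly:

/* From xmlsec/xmlsec.h, as I understand the 1.2.x sources: */
#ifdef XMLSEC_NO_SIZE_T
#define xmlSecSize      unsigned int
#else  /* XMLSEC_NO_SIZE_T */
#define xmlSecSize      size_t
#endif /* XMLSEC_NO_SIZE_T */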

The problem is that my application, which is built without XMLSEC_NO_SIZE_T, defines xmlSecSize as size_t. The sizes of the structures (in particular xmlSecEncCtx) are therefore computed differently in the library and in my application code, leading to memory corruption when I set structure members.
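To make the failure mode concrete, here is a small self-contained illustration I put together; it uses a hypothetical struct, not the real xmlSecEncCtx:

/* demo.c -- hypothetical illustration, not xmlsec code.
 * Built with -DXMLSEC_NO_SIZE_T (the library's view), the struct is
 * 24 bytes and 'data' starts at offset 8; built without it (my
 * application's view), it is 32 bytes and 'data' starts at offset 16
 * on LP64 Linux.  Any member access through the "wrong" layout
 * corrupts neighboring memory. */
#include <stdio.h>
#include <stddef.h>

#ifdef XMLSEC_NO_SIZE_T
typedef unsigned int demoSize;  /* 4 bytes */
#else
typedef size_t demoSize;        /* 8 bytes on LP64 Linux */
#endif

struct demoCtx {
    demoSize size1;
    demoSize size2;
    unsigned char data[16];
};

int main(void) {
    printf("sizeof(struct demoCtx) = %u, offsetof(data) = %u\n",
           (unsigned)sizeof(struct demoCtx),
           (unsigned)offsetof(struct demoCtx, data));
    return 0;
}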

The line that triggers the problem is line 11793:
if test "$ac_cv_sizeof_size_t" -ne "4" ; then

I have searched the archives and I found the following email threads:
[xmlsec] Xmlsec Issue on Linux x86_64, XMLSEC_NO_SIZE_T - http://www.aleksey.com/pipermail/xmlsec/2007/008006.html
[xmlsec] Crash on Ubuntu - http://www.aleksey.com/pipermail/xmlsec/2009/008778.html
FW: [xmlsec] Bug on hpux-ia64-64 ? - http://www.aleksey.com/pipermail/xmlsec/2009/008759.html

All of these seem to be related to 64-bit platforms. Unfortunately, none of the threads reaches a clear resolution of the problem.

Per the question you posed in the first thread, I checked out the most recent sources from Git and ran autogen.sh; the result behaves the same as 1.2.16. The configure script in 1.2.18 also contains the same test.

So, my question is: What should be the definition of xmlSecSize on 64-bit Linux platforms? Should it be "size_t" or "unsigned int"?

If it should be size_t, I believe that the configure script can be updated to use:
if test "$ac_cv_sizeof_size_t" -ne "4" && test "$ac_cv_sizeof_size_t" -ne "8" ; then
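In the meantime, a cheap guard I am considering on the application side is to fail the build if the headers do not resolve xmlSecSize to size_t. This is hypothetical code of my own, using the C89 negative-array-size trick since GCC 4.1.2 has no _Static_assert:

/* Hypothetical compile-time check: breaks the build if the flags
 * used to compile the application (e.g. those reported by
 * xmlsec1-config --cflags) make xmlSecSize anything other than
 * size_t.  It cannot detect a library that was *built* with
 * different flags, so the real fix is still in configure. */
#include <stddef.h>
#include <xmlsec/xmlsec.h>

typedef char assertXmlSecSizeIsSizeT
        [(sizeof(xmlSecSize) == sizeof(size_t)) ? 1 : -1];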

The questions in the HPUX thread are interesting ones. It seems that the author saw problems in the digest and signature functions because of endianness: the 32-bit result was written into the wrong word of a 64-bit variable (the high-order word, which was then truncated away).
It looks like the OpenSSL API is a bit inconsistent here (perhaps for backward compatibility): EVP_DigestUpdate() takes a size_t for the size argument, but EVP_DigestFinal() still takes an "unsigned int *".
So, this leads to another question: has the XML Sec code been updated to ensure that the right type is passed to the underlying OpenSSL functions, and then cast to xmlSecSize when the value needs to be saved?
If the answer to this is "yes", then it seems it might be safe to allow xmlSecSize to be size_t (8 bytes) on 64-bit platforms.
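For reference, the pattern I would expect inside the library is to give OpenSSL exactly the type it asks for and only then widen to xmlSecSize. This is a sketch I wrote, not the actual xmlsec code:

/* Sketch only -- not xmlsec's implementation.  EVP_DigestFinal_ex()
 * writes through an unsigned int*, so passing (unsigned int*)&size
 * where size is a 64-bit xmlSecSize fills only one word of it (the
 * wrong one on a big-endian machine, as in the HPUX thread). */
#include <openssl/evp.h>
#include <xmlsec/xmlsec.h>

static int
digestFinalSafe(EVP_MD_CTX *ctx, unsigned char *out, xmlSecSize *outSize) {
    unsigned int len = 0;    /* exactly what EVP_DigestFinal_ex() expects */

    if(EVP_DigestFinal_ex(ctx, out, &len) != 1) {
        return(-1);
    }
    *outSize = (xmlSecSize)len;   /* safe widening cast */
    return(0);
}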

I appreciate your guidance on how to resolve this issue. 

My preference is to allow xmlSecSize to be size_t so that my internal users do not need to change anything, but I want to ensure that this is a safe definition. I do not see any warnings in the build logs, but I would like a more definitive answer.

Thanks in advance for your help,

William Bell
Mentor Graphics Corporation
720 494-1141 (Office)

