Nanno Langstraat wrote:
> The intended use of the patch is to call SHA1_Load_State() directly, not
> SHA1_Init() followed by SHA1_Load_State().
>
> In other words:
>
> * The application starts by freely choosing either SHA1_Init() or
> SHA1_Load_State()
> * The application ends by freely choosing either SHA1_Final() or
> SHA1_Drop()


This makes the API lifecycle distinct: either you use the SHA_CTX with
Init/Final or with Load_State/Drop, and you can't mix the two. Is there
a technical reason why this is so? As I pointed out, you used to be able
to memcpy() the SHA_CTX with OpenSSL before the concept of an 'engine'
was around.

From the points above I now understand that your intention is for
SHA1_Load_State() to be callable on an uninitialized SHA_CTX, which
voids my point about SHA1_Init_FromState(), and that's fine.

SHA1_Drop() still seems like a bad naming choice to me and is not
self-documenting. Maybe these sorts of voodoo issues can be cleared up
with good documentation in the man-page. Maybe SHA1_End() is a better
name than SHA1_Drop(). The pairs init/end, create/destroy, load/save
and pickup/drop have clear, self-documenting meanings.



>> I'm not sure that your method of versioning the state information is
>> sufficient; I would like to propose that this problem domain be left
>> mainly up to the application to deal with. My concern stems from
>> different machine types, endianness and natural bit-width (32/64-bit)
>> issues.
>>
>> If OpenSSL is to provide some form of API contract for the
>> representation of 'State' to the application, then a new function
>> such as "u_int32_t SHA1_State_Version(void);" should be used. This
>> would also be combined with CPU type + endian information as
>> appropriate.
>>
>> It would be up to the application to store, know and check that the
>> 'State Version' it is using is compatible with the runtime version of
>> OpenSSL in use, rather than setting a policy on how this 'State
>> Version' information is stored along with the 'State Data'. The
>> SHA1_XXXX_State() functions should simply deal with 'State Data'.

>
>
> I beg to differ.
>
> Leaving this up to the application is another way of saying that
> inter-operability will happen not at all or badly.
>
> In particular, the patch currently handles endianness and word size,
> which the application inherently can not, because the saved state will
> always be an opaque binary blob as far as the users are concerned. They
> don't know anything about the internal format of SHA1, nor should they.


How do you know it's compatible for all CPU architectures and all
possible engines? I think you are overstating the compatibility your
scheme implies. It is, for all intents and purposes, opaque data that
only means something when presented to the correct implementation of
the digest that created it.

If you want inter-operability because your application needs that
guarantee, then I think you are going to have to pin the implementation
OpenSSL chooses to use (from all the engines a given installation may
have available to it), use a 3rd-party implementation, or manually copy
the OpenSSL C source implementation into your own revision control so
it stays static in your product.



> With regard to the SHA1_State_Version() call: it is a good addition in
> the sense of 'SHA1_State_Default_Version()'.
>
> But it does not really replace the 'requested_version' parameter of
> SHA1_Save_State() in the current patch: if a distributed process linked
> to a future newer version of OpenSSL needs to inter-operate with a
> process on another machine linked to an older version of OpenSSL, the
> current patch allows the first process to request an older version of
> the SHA1 state blob format so that the older OpenSSL will be able to
> Load() it.


The working state of the SHA_CTX is implementation-dependent: there is
a common documented algorithm, but no guarantee that the internal state
has to remain the same. One possible example is a silicon-based SHA1
engine, which would keep its own state and support the loading/saving
of that state.

Partly because a silicon-based SHA1 engine may or may not be in use for
a particular SHA_CTX (a point I did not make clear before), the
prototype would be "u_int32_t SHA_State_Version(SHA_CTX *);". Which
implementation of SHA was used, and therefore what opaque internal data
you can load/save, depends on how the SHA_CTX was initialized on the
system in question.


Darryl
______________________________________________________________________
OpenSSL Project http://www.openssl.org
Development Mailing List openssl-dev@openssl.org
Automated List Manager majordomo@openssl.org