If a token was already removed from the database but not from the
configuration, clearing the tokens would try to remove it again from the
database, which caused a DoesNotExistException to be thrown.
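A minimal sketch of the defensive delete, assuming a hypothetical token
mapper; the local DoesNotExistException class stands in for
OCP\AppFramework\Db\DoesNotExistException:

    <?php
    // Stand-in for OCP\AppFramework\Db\DoesNotExistException.
    class DoesNotExistException extends \Exception {}

    // Hypothetical mapper; deleteById() throws when no row matches the id.
    interface TokenMapper {
        public function deleteById(int $tokenId): void;
    }

    function clearToken(TokenMapper $mapper, int $tokenId): void {
        try {
            $mapper->deleteById($tokenId);
        } catch (DoesNotExistException $e) {
            // The token is already gone from the database: clearing it is a
            // no-op, not an error.
        }
    }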
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Otherwise, as the tokens were removed from the database but not from the
configuration, the next time the tokens were cleared the previous tokens
were still fetched from the configuration, and trying to remove them
again from the database ended with a DoesNotExistException being thrown.
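A sketch of the idea, with hypothetical callables standing in for the real
mapper and IConfig calls: the configuration entry is removed even when the
database row is already gone, so stale ids cannot survive into the next run.

    <?php
    function clearTokens(array $tokenIds, callable $deleteFromDb, callable $deleteFromConfig): void {
        foreach ($tokenIds as $tokenId) {
            try {
                $deleteFromDb($tokenId);
            } catch (\Exception $e) {
                // Row already removed from the database: ignore and continue.
            }
            // Always drop the id from the configuration so the next clearing
            // run does not pick up stale token ids again.
            $deleteFromConfig($tokenId);
        }
    }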
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The session token renewal does the following:
1) Read the old token
2) Write a new token
3) Delete the old token
If two processes succeed in reading the old token, there can be two new
tokens, because the queries were not run in a transaction. This is
particularly problematic on clustered DBs, where 1) would go to a read node
while 2) and 3) go to a write node.
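A rough sketch of running the three steps inside a single transaction, using
plain PDO instead of the actual query builder; table and column names are
assumptions, and SELECT ... FOR UPDATE presumes a database that supports
row locks:

    <?php
    function renewSessionToken(PDO $db, string $oldTokenHash, string $newTokenHash): void {
        $db->beginTransaction();
        try {
            // 1) Read the old token (FOR UPDATE so a concurrent renewal blocks).
            $stmt = $db->prepare('SELECT * FROM authtoken WHERE token = :token FOR UPDATE');
            $stmt->execute(['token' => $oldTokenHash]);
            $old = $stmt->fetch(PDO::FETCH_ASSOC);
            if ($old === false) {
                // Another process already renewed and deleted the old token.
                $db->rollBack();
                return;
            }

            // 2) Write the new token.
            $insert = $db->prepare('INSERT INTO authtoken (uid, token) VALUES (:uid, :token)');
            $insert->execute(['uid' => $old['uid'], 'token' => $newTokenHash]);

            // 3) Delete the old token.
            $delete = $db->prepare('DELETE FROM authtoken WHERE token = :token');
            $delete->execute(['token' => $oldTokenHash]);

            $db->commit();
        } catch (\Throwable $e) {
            $db->rollBack();
            throw $e;
        }
    }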
Signed-off-by: Christoph Wurst <christoph@winzerhof-wurst.at>
For passwords longer than 250 characters, use a bigger key, since the
performance impact is minor (around one second to encrypt the password).
For passwords longer than 470 characters, give up earlier and throw an
exception recommending that the admin either enable the previously
introduced configuration option or use smaller passwords.
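A sketch of the length checks; the 250 and 470 character limits come from
the text above, while the concrete key sizes and the exception type are
assumptions:

    <?php
    function keySizeForPassword(string $password): int {
        $length = strlen($password);
        if ($length > 470) {
            // Even the bigger key cannot hold this much data: give up early.
            throw new \InvalidArgumentException(
                'Password too long to store encrypted; either disable storing '
                . 'passwords in the database or use a shorter password.'
            );
        }
        if ($length > 250) {
            // Bigger key; the extra cost is roughly one second per encryption.
            return 4096;
        }
        // Default key size for ordinary password lengths.
        return 2048;
    }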
Signed-off-by: Carl Schwan <carl@carlschwan.eu>
This adds an option to disable storing passwords in the database. This
might be desirable when using single-use tokens as passwords or when
passwords are very large.
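As a config.php sketch; the key name below is an assumption, the
authoritative name is in config.sample.php:

    <?php
    // config/config.php (excerpt). The key name is an assumption, not
    // necessarily the one Nextcloud actually uses.
    $CONFIG = [
        // Do not store the (encrypted) login password alongside auth tokens,
        // e.g. when single-use tokens or very long passwords are in use.
        'auth.storeCryptedPassword' => false,
    ];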
Signed-off-by: Carl Schwan <carl@carlschwan.eu>
The auth token activity logic works as follows:
* Read auth token
* Compare last activity time stamp to current time
* Update auth token activity if it's older than x seconds
This works fine in isolation, but with concurrency it means that
occasionally the same token is read simultaneously by two processes, and
both of these processes will trigger an update of the same row.
Effectively, the second update doesn't add much value. It might set the
time stamp to the exact same value or to one a few seconds later. But
the last activity is not an exact science; we don't need this accuracy.
This patch changes the UPDATE query to include the expected value in a
comparison with the current data. This results in an affected row when
the data in the DB still has an old time stamp, but won't affect a row
if the time stamp is (nearly) up to date.
This is a micro optimization and will possibly not show any significant
performance improvement. Yet in setups with a DB cluster it means that
the write node has to send fewer changes to the read nodes due to the
lower number of actual changes.
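A sketch of the conditional UPDATE in plain SQL via PDO; table and column
names are assumptions, the real code goes through the query builder:

    <?php
    function updateTokenActivity(PDO $db, int $tokenId, int $now, int $interval = 60): int {
        // Only touch the row if the stored timestamp is still old. If another
        // process already bumped it, zero rows are affected and nothing needs
        // to be replicated to the read nodes.
        $stmt = $db->prepare(
            'UPDATE authtoken SET last_activity = :now'
            . ' WHERE id = :id AND last_activity < :threshold'
        );
        $stmt->execute([
            'now' => $now,
            'id' => $tokenId,
            'threshold' => $now - $interval,
        ]);
        return $stmt->rowCount();
    }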
Signed-off-by: Christoph Wurst <christoph@winzerhof-wurst.at>
Otherwise you can end up in a situation where you renewed your password
(in LDAP, for example), but the tokens still don't work because you did
not use them before you logged in.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
The IConfig service is documented to handle its data as strings, hence
this changes the code a bit to ensure we store keys as strings and
convert them back when reading.
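A small sketch of the pattern, with an array standing in for the real
IConfig store; the key name is made up:

    <?php
    // IConfig handles its data as strings, so cast explicitly on write and
    // convert back to the expected type on read.
    function storeLastCheck(array &$config, int $timestamp): void {
        $config['last_token_check'] = (string)$timestamp;
    }

    function readLastCheck(array $config): int {
        return (int)($config['last_token_check'] ?? '0');
    }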
Signed-off-by: Christoph Wurst <christoph@winzerhof-wurst.at>
On some systems with a lot of users these frequent activity updates
create a lot of extra DB writes.
Being able to increase the update interval helps there.
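As a config.php sketch; the key name is an assumption, check
config.sample.php of your version for the real one:

    <?php
    // config/config.php (excerpt). The key name is an assumption.
    $CONFIG = [
        // Seconds between writes of a token's last-activity timestamp.
        // Raising this trades "last seen" precision for fewer DB writes.
        'token_auth_activity_update' => 300,
    ];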
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
The serialized data in 19 has one property less, and this was not
considered in the code. Hence I'm adding a fallback. Moreover, I'm
changing the deserialization to an array instead of an object, as that is
the safer option.
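A sketch of the deserialization fallback, assuming JSON-serialized token
data; the property name is made up for illustration:

    <?php
    function deserializeToken(string $json): array {
        // Decode into an array rather than an object: missing keys can then
        // be handled with plain array operations.
        $data = json_decode($json, true, 512, JSON_THROW_ON_ERROR);

        // Data serialized by the older (19) code lacks one property; fall
        // back to a default so the rest of the code can rely on it being set.
        // 'passwordInvalid' is a made-up property name for illustration.
        $data['passwordInvalid'] = $data['passwordInvalid'] ?? false;

        return $data;
    }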
Signed-off-by: Christoph Wurst <christoph@winzerhof-wurst.at>