Locking for jobs
Charmworld runs a few cron jobs to keep the list of available charms
up to date. If charmworld is installed on more than one machine, running
these jobs on all machines would cause unnecessary duplicate work,
like queueing the charms for further processing.
This branch adds a simple locking mechanism for these jobs. The core idea:
a lock is a record with a given ID in the MongoDB collection "locks".
MongoDB can raise a DuplicateKey exception when a record with the ID
'some_name' already exists during a call of
collection.insert({'_id': 'some_name', ...})
("can" means: When I first attempted to use the lock for the jobs review and
core_review, locking simply did not work, while it worked fine in tests and
for the queueing job. It turned out that the parameter "fsync" must be
specified when pymongo.Connection is instantiated. As the test
test_bad_connection_setup shows, the actual value of this parameter is
not important -- but it must be present...)
Hence lock() ensures that fsync was specified for the given DB connection.
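The acquire-by-insert idea can be sketched as follows. This is a minimal
illustration, not the branch's actual code: FakeLocksCollection,
acquire_lock and lease_seconds are hypothetical names, and an in-memory
stand-in replaces the real MongoDB collection (which, per the above,
must come from a pymongo.Connection created with fsync specified).

```python
import time


class DuplicateKeyError(Exception):
    """Stand-in for pymongo.errors.DuplicateKeyError (simulation only)."""


class FakeLocksCollection:
    """In-memory stand-in for the MongoDB 'locks' collection, so the
    insert-based locking idea can be shown without a running server."""

    def __init__(self):
        self._docs = {}

    def insert(self, doc):
        # MongoDB enforces uniqueness on _id; a second insert with the
        # same _id raises DuplicateKeyError.
        if doc['_id'] in self._docs:
            raise DuplicateKeyError(doc['_id'])
        self._docs[doc['_id']] = dict(doc)


def acquire_lock(locks, name, lease_seconds=300):
    """Try to take the lock `name`; return True on success."""
    try:
        locks.insert({
            '_id': name,
            'expires': time.time() + lease_seconds,
        })
        return True
    except DuplicateKeyError:
        # Another process already holds the lock.
        return False
```

The insert is the whole acquisition step: whichever process inserts the
record first holds the lock, and every other process gets the duplicate-key
error and backs off.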
Locks have a lease time; if the lease expires, the now stale lock is deleted
when another process wants to acquire the lock. The related call of
locks.remove(existing_lock) is a possible race condition: two processes
might try to remove the stale lock concurrently. A simple test shows that
duplicate remove(existing_lock) calls succeed; the second call
simply does nothing.
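Why the race is harmless can be sketched with the same in-memory stand-in
(FakeLocks and the document layout are illustrative assumptions, not the
branch's code): a remove() whose spec no longer matches anything is a no-op
in MongoDB, so the "loser" of the race succeeds without effect.

```python
class FakeLocks:
    """In-memory stand-in for the 'locks' collection (illustration only)."""

    def __init__(self, docs=None):
        self._docs = dict(docs or {})

    def remove(self, spec):
        # Like MongoDB's remove(): delete the documents matching the spec.
        # If a concurrent process already deleted the stale lock, the spec
        # matches nothing and the call is a harmless no-op.
        self._docs.pop(spec.get('_id'), None)


# A lock whose lease has expired (expires in the past).
stale = {'_id': 'review', 'expires': 0}
locks = FakeLocks({'review': stale})

locks.remove(stale)  # first process deletes the stale lock
locks.remove(stale)  # second process: matches nothing, raises no error
```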
https://code.launchpad.net/~adeuring/charmworld/queue-review-mutex/+merge/150853
(do not edit description out of merge proposal)