store: force remote stores to always check the shared cache

Summary:
In a remotecontentstore/remotemetadatastore, a `get` request first runs a
prefetch, then reads the resulting data from the shared cache store. Before
this patch, the prefetch would skip downloading a value if it already existed
in the local data store, so nothing was added to the shared cache store and
the `get` would fail. This patch changes the remote stores to always prefetch
based only on the contents of the shared cache, so the data is always written
there.
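A minimal sketch of the failure mode described above (all class and store
names here are hypothetical stand-ins, not the real remotefilelog code):
without a `force` flag, prefetch skips keys already present in the local
store, so the shared cache stays empty and the follow-up `get` fails.

```python
class FileService(object):
    """Toy stand-in for the fileservice used by the remote stores."""

    def __init__(self, local, shared):
        self.local = local    # local data store (not readable by the caller)
        self.shared = shared  # shared cache store (what get() reads from)

    def prefetch(self, keys, force=False):
        for key in keys:
            if not force and key in self.local:
                # Old behavior: value "already present" locally, skip the
                # download, leaving the shared cache empty for this key.
                continue
            self.shared[key] = "downloaded:%s" % key


class RemoteContentStore(object):
    def __init__(self, fileservice):
        self._fileservice = fileservice

    def get(self, key, force=False):
        self._fileservice.prefetch([key], force=force)
        # Raises KeyError if prefetch skipped the download.
        return self._fileservice.shared[key]


local = {"a": "local-data"}
shared = {}
store = RemoteContentStore(FileService(local, shared))

failed_without_force = False
try:
    store.get("a")           # old behavior: prefetch skips, get fails
except KeyError:
    failed_without_force = True

fetched = store.get("a", force=True)  # patched behavior: always download
```

With `force=True` the prefetch consults only the shared cache, so `get`
always finds the data it just requested.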

This issue showed up when attempting to repack pack files that contained
references to nodes that were present in the local store (which repack does
not have access to) but not in the shared cache.

Test Plan:
Manually verified the issue disappeared. This is no longer strictly an issue,
since future patches refactor repack to not rely on the remote stores, so
this path shouldn't be hit again. But it's a safe change regardless.

Reviewers: #mercurial, ttung, mitrandir

Reviewed By: mitrandir

Differential Revision: https://phabricator.intern.facebook.com/D3278362

Signature: t1:3278362:1463086099:987d2fdd1c75e518f815c3159473e8cb22a15ba0
Author: Durham Goode, 2016-05-16 10:59:09 -07:00
Commit: 12ec6ebcb4 (parent e13e9ef243)
2 changed files with 4 additions and 3 deletions


@@ -132,7 +132,8 @@ class remotecontentstore(object):
         self._shared = shared

     def get(self, name, node):
-        self._fileservice.prefetch([(name, hex(node))], fetchdata=True)
+        self._fileservice.prefetch([(name, hex(node))], force=True,
+                                   fetchdata=True)
         return self._shared.get(name, node)

     def getdeltachain(self, name, node):


@@ -94,8 +94,8 @@ class remotemetadatastore(object):
         self._shared = shared

     def getancestors(self, name, node):
-        self._fileservice.prefetch([(name, hex(node))], fetchdata=False,
-                                   fetchhistory=True)
+        self._fileservice.prefetch([(name, hex(node))], force=True,
+                                   fetchdata=False, fetchhistory=True)
         return self._shared.getancestors(name, node)

     def add(self, name, node, data):