Signed-off-by: ferhat elmas <elmas.ferhat@gmail.com>

Signed-off-by: Máximo Cuadros <mcuadros@gmail.com>

Add setRef and rewritePackedRefsWhileLocked versions that support non-rw filesystems

Signed-off-by: Javi Fontan <jfontan@gmail.com>

Signed-off-by: Javi Fontan <jfontan@gmail.com>

Signed-off-by: Javi Fontan <jfontan@gmail.com>

Some filesystems do not support opening a file in read and write mode at the same time. The SetRef method is split across files, with an extra version that only writes the reference. It can be activated by building with -tags norwfs.
Signed-off-by: Javi Fontan <jfontan@gmail.com>
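
The -tags norwfs switch relies on Go build tags. Below is a minimal, hypothetical sketch of that mechanism, assuming a dotgit package; the file name, constant, and comments are illustrative and do not mirror the actual go-git sources.

```go
// setref_norwfs.go (hypothetical name): compiled only when building with
// `go build -tags norwfs`. A sibling file guarded by `// +build !norwfs`
// would define the same constant as true and keep the read/write behaviour.

// +build norwfs

package dotgit

// canOpenReadWrite reports whether the backing filesystem supports opening a
// file in read and write mode at the same time. Under norwfs it does not, so
// SetRef falls back to the variant that only writes the reference.
const canOpenReadWrite = false
```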

Signed-off-by: Javi Fontan <jfontan@gmail.com>

The Decoder object can make use of an object cache to speed up processing. Previously the only way to specify it was to manually change the struct generated by NewDecodeForFile, which led to some instances being created without one and penalized performance.
Now the cache must be passed explicitly to the constructor function, and NewDecoder creates objects with a cache of the default size. Since this is now a common task, a new helper function was added to create cache objects with the default size:
cache.NewObjectLRUDefault()
Signed-off-by: Javi Fontan <jfontan@gmail.com>
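
For illustration, a minimal sketch of the new pattern, assuming the go-git v4 import path; the decoder constructor that receives the cache is described by the commit but not quoted here, so it is left as a comment rather than a concrete call.

```go
package main

import (
	"gopkg.in/src-d/go-git.v4/plumbing/cache"
)

func main() {
	// Helper introduced by this change: an object LRU cache with the
	// package's default maximum size.
	objectCache := cache.NewObjectLRUDefault()

	// The cache is then passed explicitly to the packfile decoder
	// constructor (instead of being patched into the decoder struct after
	// construction), and the same cache can be shared by several decoders.
	_ = objectCache
}
```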

- no unnecessary err/bool checks; use the values directly
Signed-off-by: ferhat elmas <elmas.ferhat@gmail.com>

This change adds a new Alternates() method to DotGit to check and query alternate sources.

all: gofmt -s

The Windows file system doesn't let us rename over a file while holding that file's lock, so fall back to rewriting the file in place as a last resort. This could result in a partially written file if there's a failure at the wrong time.

Windows doesn't like it when we re-open a file we already have locked.

Suggested by mcuadros.
Issue: #669

This allows the user to check whether an object exists without reading all of the object data from storage.
Issue: KBFS-2445
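
A hedged sketch of that existence check, assuming the go-git v4 storer API in which the storage exposes a HasEncodedObject method; treat the method name as an assumption based on this commit rather than a verified reference.

```go
package main

import (
	"fmt"

	"gopkg.in/src-d/go-git.v4/plumbing"
	"gopkg.in/src-d/go-git.v4/storage/memory"
)

func main() {
	st := memory.NewStorage()
	hash := plumbing.NewHash("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa")

	// Check for existence without reading the object's contents; a missing
	// object is reported as plumbing.ErrObjectNotFound.
	if err := st.HasEncodedObject(hash); err == plumbing.ErrObjectNotFound {
		fmt.Println("object not present")
	}
}
```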

Suggested by taruti.
Issue: #13

The file could have been completely replaced while waiting for the lock, so we need to re-open it; otherwise we might be reading a stale file that has already been deleted or overwritten.

Issue: KBFS-2517

Currently this implementation is only valid for kbfsgit, since it makes assumptions about the filesystem not being updated during packing and about the conflict resolution rules. In the future it would be nice to replace this with a more general implementation and move this kbfsgit-optimized one into kbfsgit.
Issue: KBFS-2517

- no length for map initialization
- don't check for boolean/error return
- don't format strings
- use the String method of bytes.Buffer instead of converting bytes to string
- use `strings.Contains` instead of `strings.Index`
- use `bytes.Equal` instead of `bytes.Compare`
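
A small, self-contained sketch of the idioms listed above, using made-up values purely for illustration:

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
)

func main() {
	var buf bytes.Buffer
	buf.WriteString("refs/heads/master")

	// Use the Buffer's String method instead of string(buf.Bytes()).
	ref := buf.String()

	// Use strings.Contains instead of checking strings.Index(...) >= 0.
	if strings.Contains(ref, "heads") {
		fmt.Println("branch reference")
	}

	// Use bytes.Equal instead of checking bytes.Compare(...) == 0.
	if bytes.Equal([]byte(ref), []byte("refs/heads/master")) {
		fmt.Println("exact match")
	}

	// No length hint for map initialization when the final size is unknown.
	seen := map[string]bool{}
	seen[ref] = true
}
```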
|
| |
|
|\ |
|
| |
| |
| |
| | |
Restore the `seen` map that avoided listing packed-refs twice.
|
| | |
|
| |
| |
| |
| | |
Issue: KBFS-2509
|
| | |
|
|/ |
|
| |
|
|
|
|
| |
Signed-off-by: Máximo Cuadros <mcuadros@gmail.com>
|
|
|
|
| |
Signed-off-by: Miguel Molina <miguel@erizocosmi.co>
|
|
|
|
| |
Signed-off-by: Miguel Molina <miguel@erizocosmi.co>
|
|
|
|
|
|
| |
Now there's only two ways of getting a reference, by checking under refs/ directory or in packed-refs. refs/ directory is checked using a direct read by reference name and packed refs are cached until they have been changed.
Signed-off-by: Miguel Molina <miguel@erizocosmi.co>
|
|
|
|
| |
Signed-off-by: Miguel Molina <miguel@erizocosmi.co>
|
|\
| |
| | |
config: multiple values in RemoteConfig (URLs and Fetch)
|
| |
| |
| |
| |
| |
| |
| |
| | |
* Change `URL string` to `URL []string` in `RemoteConfig`, since
git allows multiple URLs per remote. See:
http://marc.info/?l=git&m=116231242118202&w=2
* Fix marshalling of multiple fetch refspecs.
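
A hedged illustration of a remote carrying multiple URLs and fetch refspecs; the field names (Name, URLs, Fetch) follow the go-git v4 config API as it later settled, and are an assumption relative to the exact field name mentioned in this commit.

```go
package main

import (
	"fmt"

	"gopkg.in/src-d/go-git.v4/config"
)

func main() {
	// One remote, several URLs: git tries them in order for fetch and push.
	remote := &config.RemoteConfig{
		Name: "origin",
		URLs: []string{
			"https://example.com/project.git",
			"git@example.com:project.git",
		},
		Fetch: []config.RefSpec{
			"+refs/heads/*:refs/remotes/origin/*",
		},
	}

	fmt.Println(remote.Name, remote.URLs)
}
```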

storage: reuse deltas from packfiles

* plumbing: add a DeltaObject interface for EncodedObjects that are deltas and hold additional information about them, such as the hash of the base object.
* plumbing/storer: add a DeltaObjectStorer interface for object storers that can return DeltaObject. Note that calls to EncodedObject will never return instances of DeltaObject; that requires explicit calls to DeltaObject.
* storage/filesystem: implement the DeltaObjectStorer interface.
* plumbing/packfile: the packfile encoder now supports reusing deltas that are already computed (e.g. from an existing packfile) if the storage implements DeltaObjectStorer. Reusing deltas boosts the performance of packfile generation (e.g. on push).
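
The commit does not quote the interface itself, so the following is only an assumed sketch of its shape as it would sit in the plumbing package; the method names are illustrative guesses based on the description above.

```go
package plumbing

// DeltaObject (sketch): an EncodedObject that is stored as a delta and also
// exposes information about the object it was deltified against. Method
// names here are assumptions based on the commit description.
type DeltaObject interface {
	EncodedObject

	// BaseHash returns the hash of the object this delta is based on.
	BaseHash() Hash

	// ActualHash returns the hash of the object once the delta is applied.
	ActualHash() Hash

	// ActualSize returns the size of the object once the delta is applied.
	ActualSize() int64
}
```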

Reuse the delta base object cache for packfile decoders across multiple instances.

There was an internal type (storage/filesystem.idx) used as an in-memory index for packfiles, but it was not convenient to reuse from the packfile package. This commit creates a new representation (format/packfile.Index) that can be converted to and from idxfile.Idxfile. A packfile.Index now contains the functionality that was scattered across storage/filesystem.idx and packfile.Decoder's internals. storage/filesystem now reuses packfile.Index instances, which also results in higher cache hit ratios when resolving deltas.

*: add more IO error checks