path: root/plumbing/format/packfile
Commit message (author, date; files changed, lines -/+)
* Expose Storage cache (kuba--, 2018-09-07; 1 file, -4/+3)
  Signed-off-by: kuba-- <kuba@sourced.tech>
* plumbing, storage: add bases to the common cache (Javi Fontan, 2018-08-22; 2 files, -0/+25)
  Previously, after a clone only resolved deltas were added to the cache, which caused slowdowns
  in small repositories where most objects can be held in cache. This change also makes packfiles
  reuse the delta cache from the store; previously a new delta cache was created each time a
  packfile object was created, which slowed down object access a bit and affected memory
  consumption when bases are added to the cache.
  Signed-off-by: Javi Fontan <jfontan@gmail.com>
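  To illustrate the idea of a store-level cache shared by every packfile reader (rather than a new
  cache per packfile), here is a minimal Go sketch. The names (ObjectCache, Storage,
  newPackfileReader, readAndInflate, applyDelta) are hypothetical stand-ins, not go-git's actual API.

```go
// Sketch only: a storage that owns a single cache and hands it to each
// packfile reader, so delta bases resolved once stay available later.
package packcache

type ObjectCache interface {
	Get(hash string) ([]byte, bool)
	Put(hash string, data []byte)
}

// Storage owns one cache shared by every packfile reader it opens.
type Storage struct {
	cache ObjectCache
}

type packfileReader struct {
	cache ObjectCache // shared, not created per packfile
}

// newPackfileReader reuses the store's cache instead of allocating a new one.
func (s *Storage) newPackfileReader() *packfileReader {
	return &packfileReader{cache: s.cache}
}

func (p *packfileReader) resolveDelta(baseHash string, delta []byte) []byte {
	base, ok := p.cache.Get(baseHash)
	if !ok {
		base = p.readAndInflate(baseHash) // hypothetical helper
		p.cache.Put(baseHash, base)       // bases are now cached too
	}
	return applyDelta(base, delta) // hypothetical helper
}

func (p *packfileReader) readAndInflate(hash string) []byte { return nil }  // stub
func applyDelta(base, delta []byte) []byte                  { return base } // stub
```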
* plumbing/idxfile: object iterators return entries in offset order (Javi Fontan, 2018-08-21; 1 file, -1/+1)
  In the latest change the order was switched from offset order within packfiles to hash order.
  This made reading all the objects less efficient than before and created problems where the
  previous order was expected. EntriesByOffset was also added to indexes.
  Signed-off-by: Javi Fontan <jfontan@gmail.com>
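  The commit names an EntriesByOffset addition to indexes. The sketch below is a generic
  illustration of returning entries ordered by pack offset, with a simplified Entry type rather
  than go-git's real one.

```go
// Illustrative only: index entries sorted by their offset in the packfile,
// so a full scan can read the pack sequentially.
package idxorder

import "sort"

type Entry struct {
	Hash   [20]byte
	Offset uint64
	CRC32  uint32
}

// EntriesByOffset returns a copy of the entries ordered by pack offset,
// letting callers iterate objects in the order they appear on disk.
func EntriesByOffset(entries []Entry) []Entry {
	sorted := make([]Entry, len(entries))
	copy(sorted, entries)
	sort.Slice(sorted, func(i, j int) bool {
		return sorted[i].Offset < sorted[j].Offset
	})
	return sorted
}
```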
* plumbing/packfile: do not compute sha1 for already undeltified objects (Javi Fontan, 2018-08-14; 1 file, -7/+9)
  Signed-off-by: Javi Fontan <jfontan@gmail.com>
* plumbing/packfile: tidy up objectInfo struct (Javi Fontan, 2018-08-14; 1 file, -36/+22)
  * a new hasher is created only when needed
  * unused fields are deleted
  * base content is no longer kept in memory
  Signed-off-by: Javi Fontan <jfontan@gmail.com>
* plumbing: add buffer cache and use it in packfile parser (Javi Fontan, 2018-08-14; 1 file, -14/+10)
  It uses less memory and is faster, as slices don't have to be converted from/to MemoryObject
  and entries are indexed by offset.
  Signed-off-by: Javi Fontan <jfontan@gmail.com>
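  A rough sketch of a buffer cache keyed by pack offset, as described above; the BufferCache type
  and its size-budget handling are assumptions for illustration, not the actual implementation.

```go
// Sketch: raw object buffers cached by pack offset, avoiding conversions
// to and from full in-memory object values while parsing a packfile.
package bufcache

type BufferCache struct {
	maxBytes, curBytes int
	buffers            map[int64][]byte // pack offset -> inflated object bytes
}

func New(maxBytes int) *BufferCache {
	return &BufferCache{maxBytes: maxBytes, buffers: make(map[int64][]byte)}
}

func (c *BufferCache) Get(offset int64) ([]byte, bool) {
	buf, ok := c.buffers[offset]
	return buf, ok
}

// Put stores the buffer unless it alone exceeds the budget; a real cache
// would also evict old entries once the budget is reached.
func (c *BufferCache) Put(offset int64, buf []byte) {
	if len(buf) > c.maxBytes {
		return
	}
	c.buffers[offset] = buf
	c.curBytes += len(buf)
}
```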
* plumbing: packfile, open and close packfile on FSObject reads (Miguel Molina, 2018-08-09; 5 files, -56/+126)
  Signed-off-by: Miguel Molina <miguel@erizocosmi.co>
* plumbing: packfile, rename DiskObject to FSObject (Miguel Molina, 2018-08-09; 2 files, -15/+15)
  Signed-off-by: Miguel Molina <miguel@erizocosmi.co>
* plumbing: packfile, read object content only once (Miguel Molina, 2018-08-09; 2 files, -7/+40)
  Signed-off-by: Miguel Molina <miguel@erizocosmi.co>
* plumbing: packfile, add Parse benchmark (Miguel Molina, 2018-08-09; 1 file, -0/+30)
  Signed-off-by: Miguel Molina <miguel@erizocosmi.co>
* plumbing: packfile, allow non-seekable sources on Parser (Miguel Molina, 2018-08-08; 5 files, -177/+226)
  Signed-off-by: Miguel Molina <miguel@erizocosmi.co>
* *: use parser to populate non writable storages and bug fixes (Miguel Molina, 2018-08-07; 9 files, -1156/+489)
  Signed-off-by: Miguel Molina <miguel@erizocosmi.co>
* Merge pull request #907 from erizocosmico/feature/fix-tests (Miguel Molina, 2018-08-01; 4 files, -34/+98)
  plumbing: packfile, fix package tests
  Signed-off-by: Miguel Molina <miguel@erizocosmi.co>
  * plumbing: packfile, fix package tests (Miguel Molina, 2018-07-30; 3 files, -32/+70)
    Signed-off-by: Miguel Molina <miguel@erizocosmi.co>
* plumbing/packfile: add index generation to decoder (Javi Fontan, 2018-07-27; 1 file, -7/+25)
  Signed-off-by: Javi Fontan <jfontan@gmail.com>
* plumbing: packfile, lazy object reads with DiskObjects (Miguel Molina, 2018-07-27; 4 files, -27/+293)
  Signed-off-by: Miguel Molina <miguel@erizocosmi.co>
* plumbing: packfile, new Packfile representation (Miguel Molina, 2018-07-26; 5 files, -154/+418)
  Signed-off-by: Miguel Molina <miguel@erizocosmi.co>
* plumbing, storage: integrate new index (Javi Fontan, 2018-07-26; 2 files, -9/+11)
  Now dotgit.PackWriter uses the new packfile.Parser and index.
  Signed-off-by: Javi Fontan <jfontan@gmail.com>
* plumbing/packfile: preallocate memory in PatchDelta (Javi Fontan, 2018-07-26; 1 file, -1/+1)
  Signed-off-by: Javi Fontan <jfontan@gmail.com>
* plumbing/packfile: disable lookup by offset (Javi Fontan, 2018-07-26; 1 file, -8/+9)
  In one case this disables the cache, and in the other it disables lookup when the scanner is
  not seekable. It could be added back later.
  Signed-off-by: Javi Fontan <jfontan@gmail.com>
* plumbing/packfile: add new packfile parser (Javi Fontan, 2018-07-26; 2 files, -0/+498)
  Signed-off-by: Javi Fontan <jfontan@gmail.com>
* plumbing/format/idxfile: add new Index and MemoryIndex (Miguel Molina, 2018-07-19; 2 files, -135/+17)
  Signed-off-by: Miguel Molina <miguel@erizocosmi.co>
* packfile: optimise NewIndexFromIdxFile for a very common case (David Symonds, 2018-06-21; 1 file, -2/+12)
  Loading from an on-disk idxfile will usually already have the idxfile entries in order, so
  check that before wasting time on sorting.
  Signed-off-by: David Symonds <dsymonds@golang.org>
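  A sketch of the described optimisation, assuming a simplified Entry type: check whether the
  loaded entries are already sorted before paying for a sort.

```go
// Illustrative sketch: skip sorting when the idxfile entries are already
// in offset order, which is the common case for on-disk index files.
package idxload

import "sort"

type Entry struct {
	Offset uint64
}

func sortByOffset(entries []Entry) {
	less := func(i, j int) bool { return entries[i].Offset < entries[j].Offset }
	if sort.SliceIsSorted(entries, less) {
		return // already ordered: skip the O(n log n) sort entirely
	}
	sort.Slice(entries, less)
}
```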
* plumbing: packfile, Don't push empty objects. Fixes #840 (kuba--, 2018-06-07; 2 files, -4/+19)
  Signed-off-by: kuba-- <kuba@sourced.tech>
* packfile: improve Index memory representation to be more compact (David Symonds, 2018-05-30; 2 files, -23/+67)
  Instead of using a map for offset indexing, use a sorted slice. Binary searching is fast, and a
  slice is much more compact. This has a negligible hit on speed, but has a significant impact on
  memory usage, especially for larger repos.

  benchmark                        old ns/op    new ns/op    delta
  BenchmarkIndexConstruction-12    15506506     14056098     -9.35%

  benchmark                        old allocs   new allocs   delta
  BenchmarkIndexConstruction-12    60764        60385        -0.62%

  benchmark                        old bytes    new bytes    delta
  BenchmarkIndexConstruction-12    4318145      3913169      -9.38%

  Signed-off-by: David Symonds <dsymonds@golang.org>
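  A minimal sketch of the representation change described above: offsets kept in a slice sorted
  by offset and looked up with a binary search. The types are simplified stand-ins, not the real
  Index.

```go
// Sketch: a sorted slice plus sort.Search replaces a map for offset lookup,
// trading a tiny amount of CPU for a much more compact representation.
package idxmem

import "sort"

type offsetEntry struct {
	Offset int64
	Hash   [20]byte
}

type offsetIndex struct {
	entries []offsetEntry // kept sorted by Offset
}

// lookup returns the entry stored for the given pack offset, if any.
func (idx *offsetIndex) lookup(offset int64) (offsetEntry, bool) {
	i := sort.Search(len(idx.entries), func(i int) bool {
		return idx.entries[i].Offset >= offset
	})
	if i < len(idx.entries) && idx.entries[i].Offset == offset {
		return idx.entries[i], true
	}
	return offsetEntry{}, false
}
```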
* *: Use CheckClose with named returns (Javi Fontan, 2018-03-27; 2 files, -4/+4)
  Previously some close errors were lost. This is especially problematic in go-git, as a lot of
  work is done here, like generating indexes and moving packfiles.
  Signed-off-by: Javi Fontan <jfontan@gmail.com>
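  CheckClose with a named return is a common Go pattern; the helper and example below are a
  generic illustration (writePackfile is a hypothetical function), not the exact go-git code.

```go
// Sketch of the CheckClose pattern: a deferred helper that stores the error
// from Close into a named return, so close errors are not silently dropped.
package closeutil

import (
	"io"
	"os"
)

// CheckClose closes c and, if Close fails while no earlier error occurred,
// stores the close error in *err.
func CheckClose(c io.Closer, err *error) {
	if cerr := c.Close(); cerr != nil && *err == nil {
		*err = cerr
	}
}

// writePackfile combines CheckClose with a named return: the deferred call
// can overwrite the zero-valued err if closing the file fails.
func writePackfile(path string, data []byte) (err error) {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer CheckClose(f, &err)

	_, err = f.Write(data)
	return err
}
```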
* *: skip time consuming tests (Máximo Cuadros, 2018-03-21; 1 file, -0/+9)
  Signed-off-by: Máximo Cuadros <mcuadros@gmail.com>
* plumbing: format/packfile, add SaveOriginalMetadata function (Javi Fontan, 2018-02-09; 2 files, -5/+9)
  Signed-off-by: Javi Fontan <jfontan@gmail.com>
* plumbing: format/packfile, fix panic retrieving object hash (Javi Fontan, 2018-02-09; 3 files, -4/+10)
  In some cases the original data was not saved before being cleaned, which forced a panic when
  it was needed. The change adds ObjectToPack.CleanOriginal, to be used to clean the original
  object instead of setting object.Original = nil directly. Now, when the Original data is freed
  because it is no longer in the pack window, a SetOriginal call is made so that the type, hash
  and size data are not lost.
  Signed-off-by: Javi Fontan <jfontan@gmail.com>
* plumbing: format/packfile, check nil objects in ObjectToPack (Javi Fontan, 2018-01-25; 2 files, -8/+12)
  SetOriginal now skips setting resolved values if the provided object is nil. BackToOriginal
  also skips nil Original objects.
  Signed-off-by: Javi Fontan <jfontan@gmail.com>
* plumbing: format/packfile, fix crash with cyclic deltas (Javi Fontan, 2018-01-24; 4 files, -1/+51)
  Resolving cycles relied on ObjectToPack objects having Original. This is no longer true after
  the changes from #720. This commit changes:
  * Save original type, hash and size in ObjectToPack.
  * Use SetObject to set both Original and the resolved type, hash and size.
  * Restore the original object before using BackToOriginal (cycle resolution).
  * Update the encoder test to check this case.
  Signed-off-by: Javi Fontan <jfontan@gmail.com>
* plumbing: packfile, Add crc check to scanner test (Javi Fontan, 2018-01-21; 1 file, -4/+75)
  Signed-off-by: Javi Fontan <jfontan@gmail.com>
* plumbing: packfile, Add a buffer to the crc writer (Javi Fontan, 2018-01-21; 1 file, -9/+31)
  A CRC update with a block smaller than 16 bytes uses a slower version of the function, and
  since ReadByte is heavily used by zlib inflate, most of the time the CRC was updated byte by
  byte. A new Flush method is added to the scanner to flush this CRC writer cache; it is only
  called when the Scanner reader is a teeReader.
  Signed-off-by: Javi Fontan <jfontan@gmail.com>
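  An illustrative sketch of a buffered CRC writer with a Flush method, as described above; the
  64-byte threshold and type names are assumptions, not the actual scanner code.

```go
// Sketch: single-byte writes are collected in a small buffer and folded into
// the CRC in larger blocks; Flush drains whatever is still pending.
package crcbuf

import "hash/crc32"

type bufferedCRCWriter struct {
	crc uint32
	buf []byte
}

func newBufferedCRCWriter() *bufferedCRCWriter {
	return &bufferedCRCWriter{buf: make([]byte, 0, 64)}
}

func (w *bufferedCRCWriter) Write(p []byte) (int, error) {
	w.buf = append(w.buf, p...)
	if len(w.buf) >= 64 { // update in larger blocks; tiny blocks hit a slow path
		w.Flush()
	}
	return len(p), nil
}

// Flush folds any buffered bytes into the running CRC.
func (w *bufferedCRCWriter) Flush() {
	if len(w.buf) > 0 {
		w.crc = crc32.Update(w.crc, crc32.IEEETable, w.buf)
		w.buf = w.buf[:0]
	}
}

func (w *bufferedCRCWriter) Sum32() uint32 {
	w.Flush()
	return w.crc
}
```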
* Modify cache to delete more than one item to free space (Javi Fontan, 2018-01-16; 1 file, -0/+2)
  The previous version could only delete the oldest used object; if the object to cache was
  bigger than the space freed, it could not be added. The decoder also adds bases to the cache
  when they are needed. This change makes index creation about 2x faster.
  Signed-off-by: Javi Fontan <jfontan@gmail.com>
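  A sketch of the eviction change, assuming a simple LRU backed by container/list: evict as many
  least recently used entries as needed for the new object to fit, instead of only one. The
  lruCache type is illustrative, not the go-git cache.

```go
// Sketch: keep evicting from the back of the LRU list until the new entry fits.
package lrucache

import "container/list"

type entry struct {
	key  string
	size int
}

type lruCache struct {
	maxSize, curSize int
	ll               *list.List               // front = most recently used
	index            map[string]*list.Element // key -> element in ll
}

func newLRUCache(maxSize int) *lruCache {
	return &lruCache{maxSize: maxSize, ll: list.New(), index: make(map[string]*list.Element)}
}

func (c *lruCache) Add(key string, size int) {
	if _, ok := c.index[key]; ok {
		return // already cached
	}
	if size > c.maxSize {
		return // object can never fit
	}
	// Evict as many old entries as needed, not just the single oldest one.
	for c.curSize+size > c.maxSize {
		oldest := c.ll.Back()
		e := oldest.Value.(*entry)
		c.curSize -= e.size
		delete(c.index, e.key)
		c.ll.Remove(oldest)
	}
	c.index[key] = c.ll.PushFront(&entry{key: key, size: size})
	c.curSize += size
}
```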
* Clean reconstructed objects outside pack window (Javi Fontan, 2018-01-11; 1 file, -13/+19)
  Object walk reconstructs delta objects, but these were not cleaned up after they fell out of
  the pack window; without this change, all reconstructed objects stay in memory. The
  restoreOriginal call is moved before calling Size(), since we can no longer guarantee that the
  object is already undeltified.
  Signed-off-by: Javi Fontan <javier@sourced.tech>
* Merge pull request #698 from jfontan/improvement/use-decoder-cache (Máximo Cuadros, 2017-12-20; 2 files, -17/+39)
  plumbing: cache, enforce the use of cache in packfile decoder
  * Make DeltaBaseCache private (Javi Fontan, 2017-12-20; 1 file, -6/+13)
    Signed-off-by: Javi Fontan <jfontan@gmail.com>
  * Fix typo and documentation of NewDecoderForType (Javi Fontan, 2017-12-20; 1 file, -3/+3)
    Signed-off-by: Javi Fontan <jfontan@gmail.com>
  * Enforce the use of cache in packfile decoder (Javi Fontan, 2017-12-20; 2 files, -12/+27)
    A Decoder object can make use of an object cache to speed up processing. Previously the only
    way to specify it was to manually change the struct generated by NewDecodeForFile. This led
    to some instances being created without it and penalized performance. Now the cache must be
    explicitly passed to the constructor function, and NewDecoder creates objects with a cache of
    the default size. A new helper function was added to create cache objects with the default
    size, as this is now a common task: cache.NewObjectLRUDefault().
    Signed-off-by: Javi Fontan <jfontan@gmail.com>
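  The pattern described above, sketched with simplified types: the cache is passed to the decoder
  constructor, and a helper builds one with a default size. cache.NewObjectLRUDefault is named in
  the commit, but the code below is an illustration of the pattern, not its real implementation,
  and the default size shown is an assumption.

```go
// Sketch: constructors require a cache, with a helper for the default size,
// so no decoder is accidentally created without one.
package decodercache

const defaultCacheSize = 96 * 1024 * 1024 // assumed default budget in bytes

type ObjectLRU struct {
	maxBytes int
	// entries omitted for brevity
}

// NewObjectLRUDefault builds a cache with the default size, since creating
// one is now a common task for every decoder.
func NewObjectLRUDefault() *ObjectLRU {
	return &ObjectLRU{maxBytes: defaultCacheSize}
}

type Decoder struct {
	cache *ObjectLRU
}

// NewDecoder takes the cache explicitly; a nil cache falls back to the default.
func NewDecoder(c *ObjectLRU) *Decoder {
	if c == nil {
		c = NewObjectLRUDefault()
	}
	return &Decoder{cache: c}
}
```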
* Improve delta reutilization (Antonio Jesus Navarro Perez, 2017-12-20; 5 files, -29/+152)
  - Remove a wrong 'if' on the delta selector that caused poor delta reutilization.
  - packfile.Encoder can now write deltas and objects in no specific order.
  - ObjectToPack now saves the Offset in the packfile, to be able to obtain the base offset
    recursively and write bases before the delta itself.
  - Added an encoder test to check cyclic delta chains.
  - Check the output packfile hash in all encoder tests.
  Signed-off-by: Antonio Jesus Navarro Perez <antnavper@gmail.com>
* all: gofmt -s (ferhat elmas, 2017-11-30; 2 files, -2/+2)
* all: simplification (ferhat elmas, 2017-11-29; 4 files, -14/+5)
  - no length for map initialization
  - don't check for boolean/error return
  - don't format string
  - use the String method of bytes.Buffer instead of converting bytes to string
  - use strings.Contains instead of strings.Index
  - use bytes.Equal instead of bytes.Compare
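  For two of the listed simplifications, a small before/after illustration; the function names are
  made up for the example and are not from the actual diff.

```go
// Sketch of two of the simplifications: strings.Contains instead of
// strings.Index, and bytes.Equal instead of bytes.Compare.
package simplify

import (
	"bytes"
	"strings"
)

func hasPackExtension(name string) bool {
	// Before: strings.Index(name, ".pack") >= 0
	return strings.Contains(name, ".pack")
}

func sameHash(a, b []byte) bool {
	// Before: bytes.Compare(a, b) == 0
	return bytes.Equal(a, b)
}
```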
* update to go-billy.v4 and go-git-fixtures.v3 (Máximo Cuadros, 2017-11-23; 4 files, -6/+5)
  Signed-off-by: Máximo Cuadros <mcuadros@gmail.com>
* format: packfile, fix DecodeObjectAt when Decoder has type (Máximo Cuadros, 2017-11-19; 2 files, -4/+29)
  Signed-off-by: Máximo Cuadros <mcuadros@gmail.com>
* Merge pull request #631 from keybase/strib/use-bytes-pool-for-diffs (Máximo Cuadros, 2017-10-31; 1 file, -4/+13)
  packfile: use buffer pool for diffs
  * packfile: use buffer pool for diffs (Jeremy Stribling, 2017-10-30; 1 file, -4/+13)
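  A sketch of the buffer-pool idea using sync.Pool (the actual commit may use a different pool
  type): reusing scratch buffers avoids allocating a fresh slice for every diff. diffObjects and
  its internals are hypothetical.

```go
// Sketch: borrow a scratch buffer from a pool, use it during the diff, and
// return it so later diffs reuse the same allocation.
package poolsketch

import "sync"

var bufPool = sync.Pool{
	New: func() interface{} {
		buf := make([]byte, 0, 16*1024) // start with a 16 KiB scratch buffer
		return &buf
	},
}

// diffObjects uses pooled scratch space; the result is a fresh slice owned
// by the caller, never the pooled buffer itself.
func diffObjects(base, target []byte) []byte {
	bufp := bufPool.Get().(*[]byte)
	defer bufPool.Put(bufp)

	*bufp = append((*bufp)[:0], base...) // scratch copy, reused across calls
	// Real code would compute a delta between the scratch base and target here.
	return append([]byte(nil), target...)
}
```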
* packfile: delete index maps from memory when no longer needed (Jeremy Stribling, 2017-10-30; 1 file, -0/+6)
  This helps keep memory usage stable while calculating deltas.
* config: support a configurable, and turn-off-able, pack.window (Jeremy Stribling, 2017-09-11; 5 files, -37/+89)
  One use of go-git is to transfer git data from a non-standard git repo (not stored in a file
  system, for example) to a "remote" backed by a standard, local .git repo. In this scenario,
  delta compression is not needed to reduce transfer time over the "network", because there is no
  network. The underlying storage layer has already taken care of the data transfer, and sending
  the objects to local .git storage doesn't require compression. So this PR gives the user the
  option to turn off compression when it isn't needed. Of course, this results in a larger,
  uncompressed local .git repo, but the user can then run git gc or git repack on that repo if
  they care about the storage costs. Setting the pack window to 0 reduces the total push time of
  a 36K repo by 50 seconds (out of a pre-PR total of 3m26s).
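  A hedged sketch of how a configurable pack window can switch deltification off entirely;
  PackConfig and deltify are illustrative names, not the real go-git config API.

```go
// Sketch: window == 0 means "write every object whole", otherwise each object
// is compared against a sliding window of previously seen candidate bases.
package packwindow

type PackConfig struct {
	// Window is the number of previously seen objects each object is compared
	// against when searching for a delta base; 0 turns deltification off.
	Window uint
}

type object struct {
	hash string
	data []byte
}

func deltify(cfg PackConfig, objects []object) []object {
	if cfg.Window == 0 {
		return objects // no compression: skip the delta search entirely
	}
	for i := range objects {
		lo := i - int(cfg.Window)
		if lo < 0 {
			lo = 0
		}
		_ = objects[lo:i] // candidate bases; real code would pick the best delta
	}
	return objects
}
```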
* packfile: small optimizations for findMatch and matchLength (Miguel Molina, 2017-09-07; 2 files, -16/+38)
  Signed-off-by: Miguel Molina <miguel@erizocosmi.co>
* packfile: parallelize deltification of objects in groups (Miguel Molina, 2017-09-07; 2 files, -21/+31)
  Signed-off-by: Miguel Molina <miguel@erizocosmi.co>
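  An illustrative sketch (not the go-git code) of deltifying object groups in parallel: one
  goroutine per group, joined with a sync.WaitGroup.

```go
// Sketch: groups are independent, so each can be deltified concurrently with
// no locking beyond the WaitGroup used to wait for completion.
package deltagroups

import "sync"

type object struct {
	hash string
	data []byte
}

func deltifyGroup(group []object) {
	// Stand-in for the real per-group delta search.
	for i := range group {
		_ = group[i]
	}
}

func deltifyInParallel(groups [][]object) {
	var wg sync.WaitGroup
	for _, g := range groups {
		wg.Add(1)
		go func(group []object) {
			defer wg.Done()
			deltifyGroup(group)
		}(g)
	}
	wg.Wait()
}
```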