Signed-off-by: Antonio Jesus Navarro Perez <antnavper@gmail.com>

Signed-off-by: kuba-- <kuba@sourced.tech>

After a clone, only resolved deltas were added to the cache. This caused
slowdowns in small repositories, where most objects can be held in the
cache. This change also makes packfiles reuse the delta cache from the
store; previously a new delta cache was created each time a packfile
object was created, which slowed down object access a bit and increased
memory consumption when bases were added to the cache.
Signed-off-by: Javi Fontan <jfontan@gmail.com>
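As a rough sketch of the idea (hypothetical types, not go-git's actual
API): the store owns a single cache and every packfile reader borrows
it, so a base resolved while reading one packfile is reused by the
others.

```go
package main

import "fmt"

// deltaCache is a hypothetical stand-in for the store's object cache.
type deltaCache map[string][]byte

// packfileReader is a hypothetical packfile accessor that borrows the
// store's cache instead of allocating a new one per packfile.
type packfileReader struct {
	name  string
	cache deltaCache // shared, not owned
}

func (p *packfileReader) object(hash string) []byte {
	if b, ok := p.cache[hash]; ok {
		return b // base/delta already resolved and cached elsewhere
	}
	b := []byte("decoded " + hash + " from " + p.name)
	p.cache[hash] = b // bases are cached so later deltas resolve cheaply
	return b
}

func main() {
	shared := deltaCache{} // one cache owned by the store
	a := &packfileReader{name: "pack-a", cache: shared}
	b := &packfileReader{name: "pack-b", cache: shared}

	a.object("0a1b")                      // decoded and cached
	fmt.Println(string(b.object("0a1b"))) // hit: reuses the shared cache
}
```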
In the latest change the order was changed from packfile offset order to
hash order. This makes reading all the objects less efficient than
before, and it created problems where the previous order was expected.
EntriesByOffset is also added to indexes.
Signed-off-by: Javi Fontan <jfontan@gmail.com>
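A minimal sketch of what an EntriesByOffset-style view provides, using a
hypothetical entry type: sorting index entries by packfile offset lets a
reader walk the packfile sequentially instead of seeking per hash.

```go
package main

import (
	"fmt"
	"sort"
)

// entry is a hypothetical idx entry: object hash plus packfile offset.
type entry struct {
	hash   string
	offset uint64
}

// entriesByOffset returns a copy of the entries sorted by offset, so the
// packfile can be read front to back instead of jumping around by hash.
func entriesByOffset(entries []entry) []entry {
	out := append([]entry(nil), entries...)
	sort.Slice(out, func(i, j int) bool { return out[i].offset < out[j].offset })
	return out
}

func main() {
	byHash := []entry{{"ff01", 9000}, {"0abc", 12}, {"7d2e", 4096}}
	for _, e := range entriesByOffset(byHash) {
		fmt.Println(e.offset, e.hash) // 12, 4096, 9000: sequential reads
	}
}
```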
Fixes #923
Signed-off-by: Fedor Korotkov <fedor.korotkov@gmail.com>

Signed-off-by: Javi Fontan <jfontan@gmail.com>

* a new hasher is created when needed
* unused fields are deleted
* base content is no longer kept in memory
Signed-off-by: Javi Fontan <jfontan@gmail.com>

It uses less memory and is faster, as slices don't have to be converted
from/to MemoryObject and they are indexed by offset.
Signed-off-by: Javi Fontan <jfontan@gmail.com>
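A small, purely illustrative sketch of that idea (not go-git code): keep
inflated content as raw byte slices looked up by offset, so no
intermediate object type has to be built just to resolve deltas.

```go
package main

import "fmt"

func main() {
	// Hypothetical sketch: inflated object content kept as raw byte
	// slices, looked up by packfile offset, with no conversion to an
	// intermediate object type on the way in or out.
	contentByOffset := map[int64][]byte{
		12:   []byte("blob content"),
		4096: []byte("tree content"),
	}

	// e.g. resolving an OFS_DELTA: fetch the base directly by its offset.
	base := contentByOffset[12]
	fmt.Println(len(base))
}
```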
Signed-off-by: Miguel Molina <miguel@erizocosmi.co>

Signed-off-by: Miguel Molina <miguel@erizocosmi.co>

Signed-off-by: Miguel Molina <miguel@erizocosmi.co>

Signed-off-by: Miguel Molina <miguel@erizocosmi.co>

Signed-off-by: Miguel Molina <miguel@erizocosmi.co>

Signed-off-by: Miguel Molina <miguel@erizocosmi.co>

Signed-off-by: Miguel Molina <miguel@erizocosmi.co>

plumbing: packfile, fix package tests
Signed-off-by: Miguel Molina <miguel@erizocosmi.co>

Signed-off-by: Miguel Molina <miguel@erizocosmi.co>

Signed-off-by: Javi Fontan <jfontan@gmail.com>

Signed-off-by: Javi Fontan <jfontan@gmail.com>

Signed-off-by: Miguel Molina <miguel@erizocosmi.co>

Signed-off-by: Miguel Molina <miguel@erizocosmi.co>

Now dotgit.PackWriter uses the new packfile.Parser and index.
Signed-off-by: Javi Fontan <jfontan@gmail.com>

Index is also automatically generated when OnFooter is called.
Signed-off-by: Javi Fontan <jfontan@gmail.com>
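A hedged sketch of that callback shape, with a made-up observer
interface rather than go-git's real one: the builder records objects as
the parser announces them and finalizes the index when the footer
callback fires.

```go
package main

import "fmt"

// indexBuilder is a hypothetical observer: it records every object the
// parser announces and finalizes the index when the footer is seen.
type indexBuilder struct {
	offsets map[string]int64
	done    bool
}

func (b *indexBuilder) OnObject(hash string, offset int64) {
	b.offsets[hash] = offset
}

// OnFooter runs once the whole packfile has been read; at that point all
// entries are known and the index can be generated automatically.
func (b *indexBuilder) OnFooter(packHash string) {
	b.done = true
	fmt.Printf("index for %s ready: %d entries\n", packHash, len(b.offsets))
}

func main() {
	b := &indexBuilder{offsets: map[string]int64{}}
	// A parser would drive these callbacks while scanning the packfile.
	b.OnObject("0abc", 12)
	b.OnObject("7d2e", 4096)
	b.OnFooter("packhash")
}
```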
This functionality may be moved elsewhere in the future, but is needed
now to fit filesystem.ObjectStorage and the new index.
Signed-off-by: Javi Fontan <jfontan@gmail.com>

Signed-off-by: Javi Fontan <jfontan@gmail.com>

Signed-off-by: Javi Fontan <jfontan@gmail.com>

Signed-off-by: Javi Fontan <jfontan@gmail.com>

Signed-off-by: Javi Fontan <jfontan@gmail.com>

It's still not complete:
* 64-bit offsets
* IdxChecksum
Signed-off-by: Javi Fontan <jfontan@gmail.com>
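For context on the missing 64-bit offset support: in the version-2 idx
format, offsets that don't fit in 31 bits live in a separate table of
8-byte values, and the 4-byte offset entry has its most significant bit
set, with the remaining bits indexing that table. A small sketch of that
rule (illustrative, not go-git code):

```go
package main

import "fmt"

const msb32 = uint32(1) << 31

// resolveOffset decodes a v2 idx 4-byte offset entry. If the MSB is set,
// the low 31 bits index the table of 8-byte (64-bit) offsets.
func resolveOffset(entry uint32, largeOffsets []uint64) uint64 {
	if entry&msb32 == 0 {
		return uint64(entry)
	}
	return largeOffsets[entry&^msb32]
}

func main() {
	large := []uint64{5 << 32} // an offset well past the 31-bit range
	fmt.Println(resolveOffset(4096, large))  // small offset, stored inline
	fmt.Println(resolveOffset(msb32, large)) // MSB set: low bits index the table
}
```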
In one case it disables the cache, and in the other it disables lookup
when the scanner is not seekable. These could be added back later.
Signed-off-by: Javi Fontan <jfontan@gmail.com>

Signed-off-by: Javi Fontan <jfontan@gmail.com>

Signed-off-by: Miguel Molina <miguel@erizocosmi.co>

Loading from an on-disk idxfile will usually already have the idxfile
entries in order, so check that before wasting time on sorting.
Signed-off-by: David Symonds <dsymonds@golang.org>
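The guard described above, sketched with the standard library
(illustrative, not the actual patch):

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	// Entries loaded from an on-disk idxfile are usually already in
	// order, so test for sortedness first and sort only when needed.
	offsets := []uint64{12, 4096, 9000, 16384}

	less := func(i, j int) bool { return offsets[i] < offsets[j] }
	if !sort.SliceIsSorted(offsets, less) {
		sort.Slice(offsets, less)
	}
	fmt.Println(offsets)
}
```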
Signed-off-by: kuba-- <kuba@sourced.tech>

packfile: improve Index memory representation to be more compact

Instead of using a map for offset indexing, use a sorted slice.
Binary searching is fast, and a slice is much more compact.
This has a negligible hit on speed, but has a significant impact on
memory usage, especially for larger repos.

benchmark                       old ns/op     new ns/op     delta
BenchmarkIndexConstruction-12   15506506      14056098      -9.35%

benchmark                       old allocs    new allocs    delta
BenchmarkIndexConstruction-12   60764         60385         -0.62%

benchmark                       old bytes     new bytes     delta
BenchmarkIndexConstruction-12   4318145       3913169       -9.38%

Signed-off-by: David Symonds <dsymonds@golang.org>
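A minimal sketch of the representation change, with a hypothetical entry
type: keep entries in a slice sorted by offset and binary-search it,
trading a little lookup speed for a much smaller footprint than a map.

```go
package main

import (
	"fmt"
	"sort"
)

// offsetEntry is a hypothetical index record kept in a slice sorted by offset.
type offsetEntry struct {
	offset uint64
	hash   string
}

// findByOffset binary-searches the sorted slice; compared to a map this
// is slightly slower per lookup but far more compact in memory.
func findByOffset(entries []offsetEntry, offset uint64) (offsetEntry, bool) {
	i := sort.Search(len(entries), func(i int) bool { return entries[i].offset >= offset })
	if i < len(entries) && entries[i].offset == offset {
		return entries[i], true
	}
	return offsetEntry{}, false
}

func main() {
	entries := []offsetEntry{{12, "0abc"}, {4096, "7d2e"}, {9000, "ff01"}}
	e, ok := findByOffset(entries, 4096)
	fmt.Println(e.hash, ok)
}
```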
This makes all the required Entry allocations in one go, instead of
huge amounts of small individual allocations.
Signed-off-by: David Symonds <dsymonds@golang.org>
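A small illustration of the allocation pattern (hypothetical entry
type): allocate one backing slice up front and point into it, instead of
allocating each entry separately.

```go
package main

import "fmt"

type entry struct {
	hash   string
	offset uint64
}

func main() {
	const n = 4 // in practice, the object count from the packfile header

	// One backing array for all entries; pointers index into it, instead
	// of one small heap allocation per entry.
	backing := make([]entry, n)
	ptrs := make([]*entry, n)
	for i := range backing {
		backing[i] = entry{hash: fmt.Sprintf("obj-%d", i), offset: uint64(i) * 512}
		ptrs[i] = &backing[i]
	}

	fmt.Println(ptrs[2].hash, ptrs[2].offset)
}
```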
Worktree: Provide ability to add excludes

The canonical Git client successfully decodes sideband packets up to
65524 bytes in length (4-byte header + 65520-byte payload). The Git
protocol documentation was updated in August 2016 to reduce the maximum
payload size to 65516 bytes; however, old implementations still exist in
the wild emitting 65520-byte payloads.
As there is no technical difficulty with accepting (not emitting) larger
payload sizes, this change adjusts the limit check to allow successful
decoding of packets up to 65524 bytes. This change increases
compatibility with the current canonical Git implementation.
Doc changes from August 2016:
https://github.com/git/git/commit/7841c4801ce51f1f62d376d164372e8677c6bc94#diff-52695c8fe91b78b70cea44562ae28297L67
Current packet buffer size is still LARGE_PACKET_MAX (+1 null):
https://github.com/git/git/blob/468165c1d8a442994a825f3684528361727cd8c0/sideband.c#L24
https://github.com/git/git/blob/468165c1d8a442994a825f3684528361727cd8c0/sideband.c#L36
LARGE_PACKET_MAX definition:
https://github.com/git/git/blob/468165c1d8a442994a825f3684528361727cd8c0/pkt-line.h#L100
Signed-off-by: Joseph Vusich <jvusich@amazon.com>
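A sketch of the decoder-side check using the numbers from this message
(illustrative constants and names, not go-git's actual identifiers):

```go
package main

import (
	"errors"
	"fmt"
)

const (
	sidebandHeaderLen  = 4     // pkt-line length header
	maxAcceptedPayload = 65520 // what old implementations still emit
	maxEmittedPayload  = 65516 // what the updated protocol docs allow senders
)

// checkPayload accepts packets up to the larger historical payload size,
// even though a writer following the current docs would never emit it.
func checkPayload(pktLen int) error {
	payload := pktLen - sidebandHeaderLen
	if payload > maxAcceptedPayload {
		return errors.New("sideband payload too large")
	}
	return nil
}

func main() {
	fmt.Println(checkPayload(65524)) // 65520-byte payload: accepted
	fmt.Println(checkPayload(65530)) // too large: rejected
}
```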
Previously some close errors were lost. This is especially problematic
in go-git as a lot of work is done at this point, like generating
indexes and moving packfiles.
Signed-off-by: Javi Fontan <jfontan@gmail.com>
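The usual Go pattern for not losing Close errors, sketched here with
hypothetical names:

```go
package main

import (
	"fmt"
	"io"
)

// checkClose propagates a Close error into err unless an earlier error
// already occurred; without this, work done on Close (indexing, moving
// packfiles) could fail silently.
func checkClose(c io.Closer, err *error) {
	if cerr := c.Close(); cerr != nil && *err == nil {
		*err = cerr
	}
}

type failingCloser struct{}

func (failingCloser) Close() error { return fmt.Errorf("flush failed") }

func writePackfile() (err error) {
	var w io.Closer = failingCloser{}
	defer checkClose(w, &err) // the Close error is no longer lost
	return nil
}

func main() {
	fmt.Println(writePackfile()) // prints "flush failed"
}
```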
Signed-off-by: Máximo Cuadros <mcuadros@gmail.com>

new methods Worktree.[AddGlob|RemoveGlob] and recursive Worktree.[Add|Remove]

Signed-off-by: Máximo Cuadros <mcuadros@gmail.com>

This reuses an existing patch; setting context to 6 triggers the bug
because of a 5-line trailing equals chunk.
Signed-off-by: Mechiel Lukkien <mechiel@ueber.net>

Signed-off-by: Mechiel Lukkien <mechiel@ueber.net>

Signed-off-by: Javi Fontan <jfontan@gmail.com>

In some cases the original data was not saved before it was cleaned,
forcing a panic when it was needed later. The change adds
ObjectToPack.CleanOriginal, to be used to clean the original object
instead of:
object.Original = nil
Now, when the Original data is freed because it is no longer in the
pack window, a SetOriginal call is made to make sure that the Type,
Hash and Size data is not lost.
Signed-off-by: Javi Fontan <jfontan@gmail.com>
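A simplified, hypothetical version of that bookkeeping (not the real
ObjectToPack): metadata is captured by SetOriginal and survives
CleanOriginal, so dropping the object from the pack window cannot lose
its type, hash or size.

```go
package main

import "fmt"

// objectToPack is a hypothetical, simplified sketch of the encoder's
// bookkeeping: metadata survives after the original object is dropped.
type objectToPack struct {
	original *string // stands in for the full original object
	typ      string
	hash     string
	size     int64
}

// SetOriginal records type, hash and size alongside the object itself.
func (o *objectToPack) SetOriginal(data, typ, hash string, size int64) {
	o.original = &data
	o.typ, o.hash, o.size = typ, hash, size
}

// CleanOriginal frees the object once it leaves the pack window but
// keeps the saved metadata, so later lookups don't panic.
func (o *objectToPack) CleanOriginal() { o.original = nil }

func main() {
	var otp objectToPack
	otp.SetOriginal("blob contents", "blob", "0abc", 13)
	otp.CleanOriginal()
	fmt.Println(otp.typ, otp.hash, otp.size) // metadata still available
}
```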
SetOriginal now skips setting resolved values if the provided object is
nil. BackToOriginal also skips nil Original objects.
Signed-off-by: Javi Fontan <jfontan@gmail.com>

Resolving cycles relied on ObjectToPack objects having Original. This
is no longer true with the changes from #720. This commit changes:
* Save original type, hash and size in ObjectToPack
* Use SetObject to set both Original and resolved type, hash and size
* Restore original object before using BackToOriginal (cycle resolution)
* Update encoder test to check this case
Signed-off-by: Javi Fontan <jfontan@gmail.com>

Signed-off-by: Javi Fontan <jfontan@gmail.com>