-rw-r--r--  doc/bug-graph-1.png         bin 0 -> 56863 bytes
-rw-r--r--  doc/merge1.png              bin 0 -> 56989 bytes
-rw-r--r--  doc/merge2.png              bin 0 -> 65479 bytes
-rw-r--r--  doc/model.md                160
-rw-r--r--  doc/operations.png          bin 0 -> 12842 bytes
-rw-r--r--  entity/dag/entity.go        2
-rw-r--r--  entity/dag/example_test.go  383
-rw-r--r--  query/parser_test.go        4
8 files changed, 476 insertions, 73 deletions
diff --git a/doc/bug-graph-1.png b/doc/bug-graph-1.png
new file mode 100644
index 00000000..5f6f931f
--- /dev/null
+++ b/doc/bug-graph-1.png
Binary files differ
diff --git a/doc/merge1.png b/doc/merge1.png
new file mode 100644
index 00000000..7ba24173
--- /dev/null
+++ b/doc/merge1.png
Binary files differ
diff --git a/doc/merge2.png b/doc/merge2.png
new file mode 100644
index 00000000..614be5e8
--- /dev/null
+++ b/doc/merge2.png
Binary files differ
diff --git a/doc/model.md b/doc/model.md
index c4252e6c..da76761c 100644
--- a/doc/model.md
+++ b/doc/model.md
@@ -1,111 +1,127 @@
-# Data model
+Entities data model
+===================
If you are not familiar with [git internals](https://git-scm.com/book/en/v1/Git-Internals), you might first want to read about them, as the `git-bug` data model is built on top of them.
-The biggest problem when creating a distributed bug tracker is that there is no central authoritative server (doh!). This implies some constraints.
+## Entities (bugs, ...) are a series of edit operations
-## Anybody can create and edit bugs at the same time as you
+As entities are stored and edited by multiple processes at the same time, it's not possible to store the current state like it would be done in a normal application. If two processes change the same entity and later try to merge the states, we wouldn't know which change takes precedence or how to merge those states.
-To deal with this problem, you need a way to merge these changes in a meaningful way.
+To deal with this problem, you need a way to merge these changes in a meaningful way. Instead of storing the final bug data directly, we store a series of edit `Operation`s. This is a common idea, notably with [Operation-based CRDTs](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type#Operation-based_CRDTs).
-Instead of storing the final bug data directly, we store a series of edit `Operation`s.
+![ordered operations](operations.png)
-Note: In git-bug internally it is a golang struct, but in the git repo it is stored as JSON, as seen later.
+To get the final state of an entity, we apply these `Operation`s in the correct order on an empty state to compute ("compile") our view.
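This compile step can be sketched in Go. This is an illustration only, with hypothetical types; git-bug's real `Operation` interface is richer than this:

```go
package main

import "fmt"

// State is the compiled view of an entity.
type State struct {
	Title    string
	Comments []string
}

// Operation is a single edit; replaying every operation in order rebuilds the state.
type Operation interface {
	Apply(s *State)
}

type SetTitle struct{ Title string }

func (op SetTitle) Apply(s *State) { s.Title = op.Title }

type AddComment struct{ Message string }

func (op AddComment) Apply(s *State) { s.Comments = append(s.Comments, op.Message) }

// Compile replays the ordered operations on an empty state.
func Compile(ops []Operation) *State {
	s := &State{}
	for _, op := range ops {
		op.Apply(s)
	}
	return s
}

func main() {
	ops := []Operation{
		SetTitle{Title: "This title is better"},
		AddComment{Message: "A new comment"},
	}
	s := Compile(ops)
	fmt.Println(s.Title, len(s.Comments)) // This title is better 1
}
```

Because the state is never stored, only replayed, two replicas that agree on the set and order of operations always compile to the same view.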
-These `Operation`s are aggregated in an `OperationPack`, a simple array. An `OperationPack` represents an edit session of a bug. We store this pack in git as a git `Blob`; that consists of a string containing a JSON array of operations. One such pack -- here with two operations -- might look like this:
+## Entities are stored in git objects
+
+An `Operation` is a piece of data including:
+- a type identifier
+- an author (a reference to another entity)
+- a timestamp (there are also one or two Lamport times that we will describe later)
+- all the data required by that operation type (a message, a status ...)
+- a random nonce to ensure we have enough entropy, as the operation identifier is a hash of that data (more on that later)
+
+These `Operation`s are aggregated in an `OperationPack`, a simple array. An `OperationPack` represents an edit session of a bug. As the author is the same for all the `Operation`s in an `OperationPack`, we only store it once.
+
+We store this pack in git as a git `Blob`; that consists of a string containing a JSON array of operations. One such pack -- here with two operations -- might look like this:
```json
-[
- {
- "type": "SET_TITLE",
- "author": {
- "id": "5034cd36acf1a2dadb52b2db17f620cc050eb65c"
- },
- "timestamp": 1533640589,
- "title": "This title is better"
+{
+ "author": {
+ "id": "04bf6c1a69bb8e9679644874c85f82e337b40d92df9d8d4176f1c5e5c6627058"
},
- {
- "type": "ADD_COMMENT",
- "author": {
- "id": "5034cd36acf1a2dadb52b2db17f620cc050eb65c"
+ "ops": [
+ {
+ "type": 3,
+ "timestamp": 1647377254,
+ "nonce": "SRQwUWTJCXAmQBIS+1ctKgOcbF0=",
+ "message": "Adding a comment",
+ "files": null
},
- "timestamp": 1533640612,
- "message": "A new comment"
- }
-]
+ {
+ "type": 4,
+ "timestamp": 1647377257,
+ "nonce": "la/HaRPMvD77/cJSJOUzKWuJdY8=",
+ "status": 1
+ }
+ ]
+}
```
-To reference our `OperationPack`, we create a git `Tree`; it references our `OperationPack` `Blob` under `"\ops"`. If any edit operation includes a media (for instance in a message), we can store that media as a `Blob` and reference it here under `"/media"`.
+To reference our `OperationPack`, we create a git `Tree`; it references our `OperationPack` `Blob` under `"/ops"`. If any edit operation includes a media (for instance in a message), we can store that media as a `Blob` and reference it here under `"/media"`.
To complete the picture, we create a git `Commit` that references our `Tree`. Each time we add more `Operation`s to our bug, we add a new `Commit` with the same data-structure to form a chain of `Commit`s.
This chain of `Commit`s is made available as a git `Reference` under `refs/bugs/<bug-id>`. We can later use this reference to push our data to a git remote. As git will push any data needed as well, everything will be pushed to the remote, including the media.
-For convenience and performance, each `Tree` references the very first `OperationPack` of the bug under `"/root"`. That way we can easily access the very first `Operation`, the `CREATE` operation. This operation contains important data for the bug, like the author.
-
Here is the complete picture:
-```
- refs/bugs/<bug-id>
- |
- |
- |
- +-----------+ +-----------+ "ops" +-----------+
- | Commit |----------> Tree |---------+------------| Blob | (OperationPack)
- +-----------+ +-----------+ | +-----------+
- | |
- | |
- | | "root" +-----------+
- +-----------+ +-----------+ +------------| Blob | (OperationPack)
- | Commit |----------> Tree |-- ... | +-----------+
- +-----------+ +-----------+ |
- | |
- | | "media" +-----------+ +-----------+
- | +------------| Tree |---+--->| Blob | bug.jpg
- +-----------+ +-----------+ +-----------+ | +-----------+
- | Commit |----------> Tree |-- ... |
- +-----------+ +-----------+ | +-----------+
- +--->| Blob | demo.mp4
- +-----------+
-```
-
-Now that we have this, we can easily merge our bugs without conflict. When pulling bug updates from a remote, we will simply add our new operations (that is, new `Commit`s), if any, at the end of the chain. In git terms, it's just a `rebase`.
+![git graph of a simple bug](bug-graph-1.png)
-## You can't have a simple consecutive index for your bugs
+## Time is unreliable
-The same way git can't have a simple counter as identifier for its commits as SVN does, we can't have consecutive identifiers for bugs.
+It would be very tempting to use the `Operation`'s timestamp to give us the order in which to compile the final state. However, you can't rely on the time provided by other people (their clock might be off) for anything other than display. This is a fundamental limitation of distributed systems, and even more so when actors might want to game the system.
-`git-bug` uses as identifier the hash of the first commit in the chain of commits of the bug. As this hash is ultimately computed with the content of the `CREATE` operation that includes title, message and a timestamp, it will be unique and prevent collision.
+Instead, we are going to use a [Lamport logical clock](https://en.wikipedia.org/wiki/Lamport_timestamps). A Lamport clock is a simple counter of events. This logical clock gives us a partial ordering:
+- if L1 < L2, L1 happened before L2
+- if L1 > L2, L1 happened after L2
+- if L1 == L2, we can't tell which happened first: it's a concurrent edit
-The same way as git does, this hash is displayed truncated to a 7 characters string to a human user. Note that when specifying a bug id in a command, you can enter as few character as you want, as long as there is no ambiguity. If multiple bugs match your prefix, `git-bug` will complain and display the potential matches.
-## You can't rely on the time provided by other people (their clock might by off) for anything other than just display
+Each time we append something to the data (creating an entity, adding an `Operation`), a logical time is attached: the highest time value we are aware of, plus one. This establishes causality between events and allows ordering entities and operations.
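The clock logic itself is tiny. Here is a minimal sketch (the method names are illustrative, not necessarily git-bug's actual `lamport` package API):

```go
package main

import "fmt"

// LamportClock is a simple counter of events.
type LamportClock struct {
	time uint64
}

// Increment creates a new local event and returns its logical time.
func (c *LamportClock) Increment() uint64 {
	c.time++
	return c.time
}

// Witness records a time observed from another actor, so that our next
// event is guaranteed to sort after everything we have already seen.
func (c *LamportClock) Witness(t uint64) {
	if t > c.time {
		c.time = t
	}
}

func main() {
	var c LamportClock
	c.Witness(41)              // we saw a remote event with time 41
	fmt.Println(c.Increment()) // our next edit gets time 42
}
```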
-When in the context of a single bug, events are already ordered without the need of a timestamp. An `OperationPack` is an ordered array of operations. A chain of commits orders `OperationPack`s amongst each other.
+The first commit of an entity will have both a creation time and an edit time clock, while a later commit will only have an edit time clock. These clock values are serialized directly in the `Tree` entry name (for example: `"create-clock-4"`). As a `Tree` entry needs to reference something, we reference the git `Blob` with an empty content. As all of these entries reference the same `Blob`, no network transfer is needed as long as you already have any entity in your repository.
-Now, to be able to order bugs by creation or last edition time, `git-bug` uses a [Lamport logical clock](https://en.wikipedia.org/wiki/Lamport_timestamps). A Lamport clock is a simple counter of events. When a new bug is created, its creation time will be the highest time value we are aware of plus one. This declares a causality in the event and allows to order bugs.
-
-When bugs are pushed/pulled to a git remote, it might happen that bugs get the same logical time. This means that they were created or edited concurrently. In this case, `git-bug` will use the timestamp as a second layer of sorting. While the timestamp might be incorrect due to a badly set clock, the drift in sorting is bounded by the first sorting using the logical clock. That means that if users synchronize their bugs regularly, the timestamp will rarely be used, and should still provide a kinda accurate sorting when needed.
-
-These clocks are stored in the chain of commits of each bug, as entries in each main git `Tree`. The first commit will have both a creation time and edit time clock, while a later commit will only have an edit time clock. A naive way could be to serialize the clock in a git `Blob` and reference it in the `Tree` as `"create-clock"` for example. The problem is that it would generate a lot of blobs that would need to be exchanged later for what is basically just a number.
-
-Instead, the clock value is serialized directly in the `Tree` entry name (for example: `"create-clock-4"`). As a Tree entry needs to reference something, we reference the git `Blob` with an empty content. As all of these entries will reference the same `Blob`, no network transfer is needed as long as you already have any bug in your repository.
-
-
-Example of Tree of the first commit of a bug:
+Example of Tree of the first commit of an entity:
```
100644 blob e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 create-clock-14
100644 blob e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 edit-clock-137
100644 blob a020a85baa788e12699a4d83dd735578f0d78c75 ops
-100644 blob a020a85baa788e12699a4d83dd735578f0d78c75 root
```
-Note that both `"ops"` and `"root"` entry reference the same OperationPack as it's the first commit in the chain.
-
-Example of Tree of a later commit of a bug:
+Example of Tree of a later commit of an entity:
```
100644 blob e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 edit-clock-154
100644 blob 68383346c1a9503f28eec888efd300e9fc179ca0 ops
-100644 blob a020a85baa788e12699a4d83dd735578f0d78c75 root
```
-Note that the `"root"` entry still references the same root `OperationPack`. Also, all the clocks reference the same empty `Blob`.
+
+## Entity and `Operation` identifiers
+
+`Operation`s can be referenced in the data model or by users with an identifier. This identifier is computed from the `Operation`'s data itself, with a hash of that data: `id = hash(json(op))`
+
+For entities, `git-bug` uses the hash of the entity's first `Operation`, as serialized on disk, as the identifier.
+
+In the same way git does, this hash is displayed to a human user truncated to a 7-character string. Note that when specifying a bug id in a command, you can enter as few characters as you want, as long as there is no ambiguity. If multiple entities match your prefix, `git-bug` will complain and display the potential matches.
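Both the identifier derivation and the prefix matching can be sketched as below. This assumes SHA-256 over the JSON serialization, consistent with the 64-character hexadecimal ids shown earlier; the helper names are hypothetical:

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
	"strings"
)

// DeriveId hashes the JSON serialization of an operation: id = hash(json(op)).
func DeriveId(op interface{}) string {
	data, err := json.Marshal(op)
	if err != nil {
		panic(err)
	}
	return fmt.Sprintf("%x", sha256.Sum256(data))
}

// MatchPrefix returns every id matching a user-supplied prefix;
// more than one match means the prefix is ambiguous.
func MatchPrefix(ids []string, prefix string) []string {
	var matches []string
	for _, id := range ids {
		if strings.HasPrefix(id, prefix) {
			matches = append(matches, id)
		}
	}
	return matches
}

func main() {
	id := DeriveId(map[string]interface{}{"type": 3, "message": "Adding a comment"})
	fmt.Println(id[:7]) // the truncated form shown to a human user
}
```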
+
+## Entities support conflict resolution
+
+Now that we have all that, we can finally merge our entities without conflict and collaborate with other users. Let's start by getting rid of two simple scenarios:
+- if we simply pull updates, we fast-forward our local reference. We get an updated graph that we read as usual.
+- if we push fast-forward updates, we move the remote reference forward, and other users can update their reference as well.
+
+The tricky part happens when we have concurrent edits. If we pull updates while we have local changes (a non-fast-forward, in git terms), git-bug creates the equivalent of a merge commit to merge both branches into a DAG. This DAG has a single root containing the first operation, but it can have branches that get merged back into a single head pointed to by the reference.
+
+As we don't have a purely linear series of commits/`Operation`s, we need a deterministic ordering to always apply operations in the same order.
+
+git-bug applies the following algorithm:
+1. load and read all the commits and the associated `OperationPack`s
+2. make sure that the Lamport clocks respect the DAG structure: a parent commit/`OperationPack` (that is, closer to the root) cannot have a clock higher than or equal to that of its direct child. If such a problem happens, the commit is refused/discarded.
+3. individual `Operation`s are assembled together and ordered according to the following priorities:
+   1. the edit Lamport clock, if not concurrent
+   2. the lexicographic order of the `OperationPack`'s identifier
+
+Step 2 provides and enforces a constraint on the `Operation`s' logical clocks: we inherit the implicit ordering given by the DAG, and the logical clocks then refine that ordering. This, coupled with signed commits, has the nice property of limiting how this data model can be abused.
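The two-level ordering of step 3 can be sketched as a stable sort with two keys. The types here are hypothetical, but the comparison mirrors the one in `entity/dag/entity.go` shown further down in this diff:

```go
package main

import (
	"fmt"
	"sort"
)

// opWithMeta pairs an operation with its edit clock and its pack's identifier.
type opWithMeta struct {
	name     string
	editTime uint64 // Lamport edit clock
	packId   string // OperationPack identifier
}

// orderOps sorts operations by edit clock first, then by pack identifier
// to break ties between concurrent edits deterministically.
func orderOps(ops []opWithMeta) {
	sort.SliceStable(ops, func(i, j int) bool {
		// primary: the edit Lamport clock
		if ops[i].editTime != ops[j].editTime {
			return ops[i].editTime < ops[j].editTime
		}
		// secondary: lexicographic order of the pack identifier,
		// an unbiased tie-break for concurrent edits
		return ops[i].packId < ops[j].packId
	})
}

func main() {
	ops := []opWithMeta{
		{"C", 3, "bbb"},
		{"A", 1, "aaa"},
		{"B", 3, "aaa"}, // concurrent with C, ordered by pack id
	}
	orderOps(ops)
	for _, op := range ops {
		fmt.Print(op.name)
	}
	fmt.Println() // ABC
}
```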
+
+Here is an example of such an ordering. We can see that:
+- Lamport clocks respect the DAG structure
+- the final `Operation` order is [A,B,C,D,E,F], according to those clocks
+
+![merge scenario 1](merge1.png)
+
+When we have concurrent edits, we apply a secondary ordering based on the `OperationPack`'s identifier:
+
+![merge scenario 2](merge2.png)
+
+This secondary ordering doesn't carry much meaning, but it's unbiased and hard to abuse. \ No newline at end of file
diff --git a/doc/operations.png b/doc/operations.png
new file mode 100644
index 00000000..79b6c8e7
--- /dev/null
+++ b/doc/operations.png
Binary files differ
diff --git a/entity/dag/entity.go b/entity/dag/entity.go
index 0760cdec..f3229b7e 100644
--- a/entity/dag/entity.go
+++ b/entity/dag/entity.go
@@ -205,7 +205,7 @@ func read(def Definition, repo repository.ClockedRepo, resolver identity.Resolve
if oppSlice[i].EditTime != oppSlice[j].EditTime {
return oppSlice[i].EditTime < oppSlice[j].EditTime
}
- // We have equal EditTime, which means we have concurrent edition over different machines and we
+ // We have equal EditTime, which means we have concurrent edition over different machines, and we
// can't tell which one came first. So, what now? We still need a total ordering and the most stable possible.
// As a secondary ordering, we can order based on a hash of the serialized Operations in the
// operationPack. It doesn't carry much meaning but it's unbiased and hard to abuse.
diff --git a/entity/dag/example_test.go b/entity/dag/example_test.go
new file mode 100644
index 00000000..d034e59d
--- /dev/null
+++ b/entity/dag/example_test.go
@@ -0,0 +1,383 @@
+package dag_test
+
+import (
+ "encoding/json"
+ "fmt"
+ "os"
+
+ "github.com/MichaelMure/git-bug/entity"
+ "github.com/MichaelMure/git-bug/entity/dag"
+ "github.com/MichaelMure/git-bug/identity"
+ "github.com/MichaelMure/git-bug/repository"
+)
+
+// This file explains how to define a replicated data structure, stored in git and using git as a medium
+// for synchronisation. To do this, we'll use the entity/dag package, which will do all the complex handling.
+//
+// The example we'll use here is a small shared configuration with two fields. One of them is special as
+// it also defines who is allowed to change said configuration. Note: this example is voluntarily a bit
+// complex, with operations linking to identities and logic rules, to show how something more complex
+// than a toy would look. That said, it's still a simplified example: in git-bug for example, more
+// layers are added for caching, memory handling and to provide an easier-to-use API.
+//
+// Let's start by defining the document/structure we are going to share:
+
+// Snapshot is the compiled view of a ProjectConfig
+type Snapshot struct {
+ // Administrator is the set of users with the highest level of access
+ Administrator map[identity.Interface]struct{}
+ // SignatureRequired indicates that all git commits need to be signed
+ SignatureRequired bool
+}
+
+// HasAdministrator returns true if the given identity is included in the administrator set.
+func (snap *Snapshot) HasAdministrator(i identity.Interface) bool {
+ for admin := range snap.Administrator {
+ if admin.Id() == i.Id() {
+ return true
+ }
+ }
+ return false
+}
+
+// Now, we will not edit this configuration directly. Instead, we are going to apply "operations" on it.
+// Those are the ones that will be stored and shared. Doing things this way allows merging concurrent
+// edits and dealing with conflicts.
+//
+// Here, we will define three operations:
+// - SetSignatureRequired is a simple operation that sets or unsets the SignatureRequired boolean
+// - AddAdministrator is more complex and adds a new administrator to the Administrator set
+// - RemoveAdministrator is the counterpart that removes administrators
+//
+// Note: there is some amount of boilerplate for operations. In a real project, some of that can be
+// factorized and simplified.
+
+// Operation is the operation interface acting on Snapshot
+type Operation interface {
+ dag.Operation
+
+ // Apply the operation to a Snapshot to create the final state
+ Apply(snapshot *Snapshot)
+}
+
+type OperationType int
+
+const (
+ _ OperationType = iota
+ SetSignatureRequiredOp
+ AddAdministratorOp
+ RemoveAdministratorOp
+)
+
+// SetSignatureRequired is an operation to set/unset whether git signatures are required.
+type SetSignatureRequired struct {
+ author identity.Interface
+ OperationType OperationType `json:"type"`
+ Value bool `json:"value"`
+}
+
+func NewSetSignatureRequired(author identity.Interface, value bool) *SetSignatureRequired {
+ return &SetSignatureRequired{author: author, OperationType: SetSignatureRequiredOp, Value: value}
+}
+
+func (ssr *SetSignatureRequired) Id() entity.Id {
+ // the Id of the operation is the hash of the serialized data.
+ // we could memoize the Id when deserializing, but that will do
+ data, _ := json.Marshal(ssr)
+ return entity.DeriveId(data)
+}
+
+func (ssr *SetSignatureRequired) Validate() error {
+ if ssr.author == nil {
+ return fmt.Errorf("author not set")
+ }
+ return ssr.author.Validate()
+}
+
+func (ssr *SetSignatureRequired) Author() identity.Interface {
+ return ssr.author
+}
+
+// Apply is the function that makes changes on the snapshot
+func (ssr *SetSignatureRequired) Apply(snapshot *Snapshot) {
+ // check that we are allowed to change the config
+ if _, ok := snapshot.Administrator[ssr.author]; !ok {
+ return
+ }
+ snapshot.SignatureRequired = ssr.Value
+}
+
+// AddAdministrator is an operation to add a new administrator in the set
+type AddAdministrator struct {
+ author identity.Interface
+ OperationType OperationType `json:"type"`
+ ToAdd []identity.Interface `json:"to_add"`
+}
+
+// addAdministratorJson is a helper struct to deserialize identities with a concrete type.
+type addAdministratorJson struct {
+ ToAdd []identity.IdentityStub `json:"to_add"`
+}
+
+func NewAddAdministratorOp(author identity.Interface, toAdd ...identity.Interface) *AddAdministrator {
+ return &AddAdministrator{author: author, OperationType: AddAdministratorOp, ToAdd: toAdd}
+}
+
+func (aa *AddAdministrator) Id() entity.Id {
+ // we could memoize the Id when deserializing, but that will do
+ data, _ := json.Marshal(aa)
+ return entity.DeriveId(data)
+}
+
+func (aa *AddAdministrator) Validate() error {
+ // Let's enforce an arbitrary rule
+ if len(aa.ToAdd) == 0 {
+ return fmt.Errorf("nothing to add")
+ }
+ if aa.author == nil {
+ return fmt.Errorf("author not set")
+ }
+ return aa.author.Validate()
+}
+
+func (aa *AddAdministrator) Author() identity.Interface {
+ return aa.author
+}
+
+// Apply is the function that makes changes on the snapshot
+func (aa *AddAdministrator) Apply(snapshot *Snapshot) {
+ // check that we are allowed to change the config ... or if there is no admin yet
+ if !snapshot.HasAdministrator(aa.author) && len(snapshot.Administrator) != 0 {
+ return
+ }
+ for _, toAdd := range aa.ToAdd {
+ snapshot.Administrator[toAdd] = struct{}{}
+ }
+}
+
+// RemoveAdministrator is an operation to remove an administrator from the set
+type RemoveAdministrator struct {
+ author identity.Interface
+ OperationType OperationType `json:"type"`
+ ToRemove []identity.Interface `json:"to_remove"`
+}
+
+// removeAdministratorJson is a helper struct to deserialize identities with a concrete type.
+type removeAdministratorJson struct {
+ ToRemove []identity.IdentityStub `json:"to_remove"`
+}
+
+func NewRemoveAdministratorOp(author identity.Interface, toRemove ...identity.Interface) *RemoveAdministrator {
+ return &RemoveAdministrator{author: author, OperationType: RemoveAdministratorOp, ToRemove: toRemove}
+}
+
+func (ra *RemoveAdministrator) Id() entity.Id {
+ // the Id of the operation is the hash of the serialized data.
+ // we could memoize the Id when deserializing, but that will do
+ data, _ := json.Marshal(ra)
+ return entity.DeriveId(data)
+}
+
+func (ra *RemoveAdministrator) Validate() error {
+ // Let's enforce some rules. If we return an error, this operation will be
+ // considered invalid and will not be included in our data.
+ if len(ra.ToRemove) == 0 {
+ return fmt.Errorf("nothing to remove")
+ }
+ if ra.author == nil {
+ return fmt.Errorf("author not set")
+ }
+ return ra.author.Validate()
+}
+
+func (ra *RemoveAdministrator) Author() identity.Interface {
+ return ra.author
+}
+
+// Apply is the function that makes changes on the snapshot
+func (ra *RemoveAdministrator) Apply(snapshot *Snapshot) {
+ // check if we are allowed to make changes
+ if !snapshot.HasAdministrator(ra.author) {
+ return
+ }
+ // special rule: we can't end up with no administrator
+ stillSome := false
+ for admin := range snapshot.Administrator {
+ if admin != ra.author {
+ stillSome = true
+ break
+ }
+ }
+ if !stillSome {
+ return
+ }
+ // apply
+ for _, toRemove := range ra.ToRemove {
+ delete(snapshot.Administrator, toRemove)
+ }
+}
+
+// Now, let's create the main object (the entity) we are going to manipulate: ProjectConfig.
+// This object wraps a dag.Entity, which makes it inherit some methods and provides all the complex
+// DAG handling. Additionally, ProjectConfig is the place where we can add functions specific to that type.
+
+type ProjectConfig struct {
+ // this is really all we need
+ *dag.Entity
+}
+
+func NewProjectConfig() *ProjectConfig {
+ return &ProjectConfig{Entity: dag.New(def)}
+}
+
+// a Definition describes a few properties of the Entity, a sort of configuration to manipulate the
+// DAG of operations
+var def = dag.Definition{
+ Typename: "project config",
+ Namespace: "conf",
+ OperationUnmarshaler: operationUnmarshaller,
+ FormatVersion: 1,
+}
+
+// operationUnmarshaller is the function performing the deserialization of the JSON data into our own
+// concrete Operations. If needed, we can use the resolver to connect to other entities.
+func operationUnmarshaller(author identity.Interface, raw json.RawMessage, resolver identity.Resolver) (dag.Operation, error) {
+ var t struct {
+ OperationType OperationType `json:"type"`
+ }
+
+ if err := json.Unmarshal(raw, &t); err != nil {
+ return nil, err
+ }
+
+ var value interface{}
+
+ switch t.OperationType {
+ case AddAdministratorOp:
+ value = &addAdministratorJson{}
+ case RemoveAdministratorOp:
+ value = &removeAdministratorJson{}
+ case SetSignatureRequiredOp:
+ value = &SetSignatureRequired{}
+ default:
+ panic(fmt.Sprintf("unknown operation type %v", t.OperationType))
+ }
+
+ err := json.Unmarshal(raw, &value)
+ if err != nil {
+ return nil, err
+ }
+
+ var op Operation
+
+ switch value := value.(type) {
+ case *SetSignatureRequired:
+ value.author = author
+ op = value
+ case *addAdministratorJson:
+ // We need something less straightforward to deserialize and resolve identities
+ aa := &AddAdministrator{
+ author: author,
+ OperationType: AddAdministratorOp,
+ ToAdd: make([]identity.Interface, len(value.ToAdd)),
+ }
+ for i, stub := range value.ToAdd {
+ iden, err := resolver.ResolveIdentity(stub.Id())
+ if err != nil {
+ return nil, err
+ }
+ aa.ToAdd[i] = iden
+ }
+ op = aa
+ case *removeAdministratorJson:
+ // We need something less straightforward to deserialize and resolve identities
+ ra := &RemoveAdministrator{
+ author: author,
+ OperationType: RemoveAdministratorOp,
+ ToRemove: make([]identity.Interface, len(value.ToRemove)),
+ }
+ for i, stub := range value.ToRemove {
+ iden, err := resolver.ResolveIdentity(stub.Id())
+ if err != nil {
+ return nil, err
+ }
+ ra.ToRemove[i] = iden
+ }
+ op = ra
+ default:
+ panic(fmt.Sprintf("unknown operation type %T", value))
+ }
+
+ return op, nil
+}
+
+// Compile computes a view of the final state. This is what we would use to display the state
+// in a user interface.
+func (pc ProjectConfig) Compile() *Snapshot {
+ // Note: this would benefit from caching, but it's a simple example
+ snap := &Snapshot{
+ // default value
+ Administrator: make(map[identity.Interface]struct{}),
+ SignatureRequired: false,
+ }
+ for _, op := range pc.Operations() {
+ op.(Operation).Apply(snap)
+ }
+ return snap
+}
+
+// Read is a helper to load a ProjectConfig from a Repository
+func Read(repo repository.ClockedRepo, id entity.Id) (*ProjectConfig, error) {
+ e, err := dag.Read(def, repo, identity.NewSimpleResolver(repo), id)
+ if err != nil {
+ return nil, err
+ }
+ return &ProjectConfig{Entity: e}, nil
+}
+
+func Example_entity() {
+ // Note: this example ignores errors for readability
+ // Note: variable names get a little confusing as we are simulating both sides in the same function
+
+ // Let's start by defining two git repositories and connecting them as remotes
+ repoRenePath, _ := os.MkdirTemp("", "")
+ repoIsaacPath, _ := os.MkdirTemp("", "")
+ repoRene, _ := repository.InitGoGitRepo(repoRenePath)
+ repoIsaac, _ := repository.InitGoGitRepo(repoIsaacPath)
+ _ = repoRene.AddRemote("origin", repoIsaacPath)
+ _ = repoIsaac.AddRemote("origin", repoRenePath)
+
+ // Now we need identities and to propagate them
+ rene, _ := identity.NewIdentity(repoRene, "René Descartes", "rene@descartes.fr")
+ isaac, _ := identity.NewIdentity(repoRene, "Isaac Newton", "isaac@newton.uk")
+ _ = rene.Commit(repoRene)
+ _ = isaac.Commit(repoRene)
+ _ = identity.Pull(repoIsaac, "origin")
+
+ // create a new entity
+ confRene := NewProjectConfig()
+
+ // add some operations
+ confRene.Append(NewAddAdministratorOp(rene, rene))
+ confRene.Append(NewAddAdministratorOp(rene, isaac))
+ confRene.Append(NewSetSignatureRequired(rene, true))
+
+ // René commits to his own repo
+ _ = confRene.Commit(repoRene)
+
+ // Isaac pulls and reads the config
+ _ = dag.Pull(def, repoIsaac, identity.NewSimpleResolver(repoIsaac), "origin", isaac)
+ confIsaac, _ := Read(repoIsaac, confRene.Id())
+
+ // Compile gives the current state of the config
+ snapshot := confIsaac.Compile()
+ for admin := range snapshot.Administrator {
+ fmt.Println(admin.DisplayName())
+ }
+
+ // Isaac adds more operations
+ confIsaac.Append(NewSetSignatureRequired(isaac, false))
+ reneFromIsaacRepo, _ := identity.ReadLocal(repoIsaac, rene.Id())
+ confIsaac.Append(NewRemoveAdministratorOp(isaac, reneFromIsaacRepo))
+ _ = confIsaac.Commit(repoIsaac)
+}
diff --git a/query/parser_test.go b/query/parser_test.go
index cef01ffd..f71a7b42 100644
--- a/query/parser_test.go
+++ b/query/parser_test.go
@@ -62,6 +62,10 @@ func TestParse(t *testing.T) {
}},
{"sort:unknown", nil},
+ {"label:\"foo:bar\"", &Query{
+ Filters: Filters{Label: []string{"foo:bar"}},
+ }},
+
// KVV
{`metadata:key:"https://www.example.com/"`, &Query{
Filters: Filters{Metadata: []StringPair{{"key", "https://www.example.com/"}}},