* introduce exchange info selector support
Adds support in selectors/exchange for queries based on
backup.ExchangeInfo entries. This allows the declaration
of selectors based on non-identifier details such as sender,
subject, or receivedAt time.
Changes Exclude scope matching from an Any-match
comparator to an All-match comparator. This keeps exclude
and include behavior identical, hopefully reducing
confusion for users.
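A minimal sketch of the idea, not the corso API (all names here are hypothetical): an item's non-identifier info is compared against info-based scopes, and both include and exclude scopes only match when every clause in the scope matches (All-match); exclusion then applies if any one exclude scope fully matches.

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// exchangeInfo mirrors the kind of non-identifier details a selector
// could match on: sender, subject, receivedAt.
type exchangeInfo struct {
	Sender     string
	Subject    string
	ReceivedAt time.Time
}

// infoScope is a set of clauses; an item matches the scope only if
// every clause matches (All-match).
type infoScope map[string]string

func (s infoScope) matches(info exchangeInfo) bool {
	for k, v := range s {
		switch k {
		case "sender":
			if info.Sender != v {
				return false
			}
		case "subject":
			if !strings.Contains(info.Subject, v) {
				return false
			}
		}
	}
	return true
}

// excluded reports whether any exclude scope fully matches the item.
func excluded(info exchangeInfo, excludes []infoScope) bool {
	for _, sc := range excludes {
		if sc.matches(info) {
			return true
		}
	}
	return false
}

func main() {
	info := exchangeInfo{Sender: "a@example.com", Subject: "weekly report"}
	excludes := []infoScope{{"sender": "a@example.com", "subject": "report"}}
	fmt.Println(excluded(info, excludes)) // true: every clause in the scope matched
}
```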
* filter backup details by flags
`backup details` should have its output filtered by the flags provided by
the user. In addition, the selector's FilterDetails should maintain
information (especially service info) about the entries, rather than slicing
them down to only the path reference.
* refactor selector scopes to accept slices
CLI flag implementation was exposing a toil issue: building
selectors required a lot of repetitious code for combining
inputs into sets of scopes. Since all of these productions
were effectively identical (e.g., for each user, then each folder,
create a scope with the IDs), the cleaner solution is to pack
that behavior into the scope constructors themselves.
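A rough sketch of the "constructors accept slices" idea, with hypothetical names: callers pass all users and folders at once, and the constructor produces the cross-product of scopes internally.

```go
package main

import "fmt"

type scope map[string]string

// mailFolderScopes builds one scope per (user, folder) pair, so the CLI
// layer no longer needs nested loops to assemble scopes itself.
func mailFolderScopes(users, folders []string) []scope {
	scopes := make([]scope, 0, len(users)*len(folders))
	for _, u := range users {
		for _, f := range folders {
			scopes = append(scopes, scope{"user": u, "folder": f})
		}
	}
	return scopes
}

func main() {
	fmt.Println(mailFolderScopes([]string{"u1", "u2"}, []string{"inbox", "sent"}))
}
```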
* adds the onError option to operations
Adds the OnError option to operations.Options. OnError tells
corso whether to continue despite concurrent processing
errors, or to exit processing on any error. Also includes flag
support for setting the option. This only adds the option; it
does not assert error handling behavior in corso.
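A minimal sketch of threading an on-error policy through operation options; the type and flag names are hypothetical, and the stdlib flag package stands in for the CLI's real flag handling.

```go
package main

import (
	"flag"
	"fmt"
)

// OnErrorPolicy controls whether processing continues past errors.
type OnErrorPolicy string

const (
	ContinueOnError OnErrorPolicy = "continue"
	FailFast        OnErrorPolicy = "fail-fast"
)

// Options carries cross-cutting operation settings.
type Options struct {
	OnError OnErrorPolicy
}

func main() {
	failFast := flag.Bool("fail-fast", false, "exit processing on the first error")
	flag.Parse()

	opts := Options{OnError: ContinueOnError}
	if *failFast {
		opts.OnError = FailFast
	}
	fmt.Printf("operation options: %+v\n", opts)
}
```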
* adds store package for wrapping model_store
Introduces the pkg/store package, which contains funcs
for wrapping the model_store with common requests.
This package was chosen because it sits in an accessible
place, centralizes the functionality, and does not introduce
circular dependencies.
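A sketch of the wrapping idea with hypothetical types: the package holds a small interface over the model store and exposes common requests behind named helpers, so callers never talk to the model store directly.

```go
package store

import "context"

// modelGetter is the slice of model-store behavior this wrapper needs.
type modelGetter interface {
	Get(ctx context.Context, id string, out any) error
}

// Backup is a stand-in for the persisted backup model.
type Backup struct {
	ID string
}

// Wrapper centralizes common model-store requests in one accessible package.
type Wrapper struct {
	ms modelGetter
}

func NewWrapper(ms modelGetter) *Wrapper { return &Wrapper{ms: ms} }

// GetBackup is one such common request: fetch a backup model by id.
func (w *Wrapper) GetBackup(ctx context.Context, id string) (*Backup, error) {
	b := &Backup{}
	if err := w.ms.Get(ctx, id, b); err != nil {
		return nil, err
	}
	return b, nil
}
```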
* add e2e backup-restore integration test
Adds an e2e integration test that starts by backing up
data, and ends with restoring it. Also makes various
amendments to other code where necessary to
facilitate this exercise.
* add output formatting control to cli
Adds the ability for the CLI to output either a
text table or a JSON blob to the terminal. Table is
the default behavior; JSON is toggled with the --json
flag.
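A simplified sketch of the table-vs-JSON toggle; the flag name matches the description above (--json), everything else is hypothetical.

```go
package main

import (
	"encoding/json"
	"flag"
	"fmt"
	"os"
	"text/tabwriter"
)

type result struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	asJSON := flag.Bool("json", false, "print output as a JSON blob instead of a table")
	flag.Parse()

	res := []result{{ID: "abc123", Status: "completed"}}

	if *asJSON {
		_ = json.NewEncoder(os.Stdout).Encode(res)
		return
	}

	// Default: a simple text table.
	tw := tabwriter.NewWriter(os.Stdout, 0, 2, 2, ' ', 0)
	fmt.Fprintln(tw, "ID\tSTATUS")
	for _, r := range res {
		fmt.Fprintf(tw, "%s\t%s\n", r.ID, r.Status)
	}
	tw.Flush()
}
```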
These fields can be used to look up snapshots in kopia, as they are part
of the `Source` struct. They are also stored in the kopia sessions, but
they should not cause session collisions. Sessions have unique IDs to go
along with this information.
* Implement getting data for directory subtree
Return a slice of collections with data for a given directory subtree in
kopia. Traverse the full directory before creating a DataCollection
instead of sending items as they are found because future
implementations may cause blocking on send. This could reduce
parallelism because the code won't be able to find other directories to
traverse until the files are seen. Kopia also currently loads the entire
directory at once so there's not much benefit to streaming.
The system will now continue pulling data until completion and report all
errors at the end of the run.
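A sketch of that approach, with an in-memory tree standing in for the kopia directory API: fully walk the subtree, buffer each directory's items before building its collection, keep going past errors, and report them together at the end.

```go
package main

import (
	"errors"
	"fmt"
)

type dir struct {
	name     string
	items    []string
	children []*dir
	readErr  error // simulates a failure while reading this directory
}

type collection struct {
	path  string
	items []string
}

// walk buffers every directory's items before building its collection,
// and continues past errors so they can all be reported at once.
func walk(d *dir, parent string, out *[]collection, errs *[]error) {
	path := parent + "/" + d.name
	if d.readErr != nil {
		*errs = append(*errs, fmt.Errorf("%s: %w", path, d.readErr))
	} else {
		*out = append(*out, collection{path: path, items: append([]string{}, d.items...)})
	}
	for _, c := range d.children {
		walk(c, path, out, errs)
	}
}

func main() {
	root := &dir{
		name:  "inbox",
		items: []string{"m1", "m2"},
		children: []*dir{
			{name: "archive", readErr: errors.New("not found")},
			{name: "sent", items: []string{"m3"}},
		},
	}

	var (
		cols []collection
		errs []error
	)
	walk(root, "", &cols, &errs)

	fmt.Println("collections:", cols)
	fmt.Println("errors:", errors.Join(errs...))
}
```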
* Tests for getting persisted subtree data including some error cases
Update the backup operation to create RestorePoint and RestorePointDetails models in the repository
Add modelstore to the operation to allow backup/restore operations to update/query for corso models
Closes #268
* Factor out common code for getting kopia items
Both directory and single item restore in kopia need to do common tasks
like getting the item in question. Factor out that common code and
adjust tests to prep for directory restore.
* wire selectors up through backup handling
Selectors are implemented enough to add them end-
to-end in some places. This starts with backup
creation, since that's the most stable set of code in
the repo at the moment.
Use a slice to back the data instead of adding directly to the channel,
as sketched below, for two reasons (this may change in the future):
* kopia loads all data about a directory at the same time
* consumers of the DataCollection may not pull items from the channel
at a fast rate, which could block adding to the channel. This could
lead to delays in discovering other directories to traverse in
multi-threaded scenarios
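A sketch of the slice-backed approach with hypothetical names: items are accumulated in a slice up front, and a goroutine drains the slice into the channel on demand, so slow consumers cannot stall directory traversal.

```go
package main

import "fmt"

type item struct {
	name string
	data []byte
}

type collection struct {
	items []item
}

// Items exposes the buffered slice as a channel, matching the
// channel-based DataCollection shape described above.
func (c *collection) Items() <-chan item {
	ch := make(chan item)
	go func() {
		defer close(ch)
		for _, it := range c.items {
			ch <- it
		}
	}()
	return ch
}

func main() {
	c := &collection{items: []item{{name: "a"}, {name: "b"}}}
	for it := range c.Items() {
		fmt.Println(it.name)
	}
}
```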
A misuse of variable declaration that overlapped with
var shadowing on 'err' was causing the attachment retry
error to get lost, meaning failures to retrieve attachments
occurred silently.
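An illustration of the shadowing pattern (not the original code): `:=` inside the block declares a new `err`, so the error the function actually returns is never updated and the failure is dropped silently.

```go
package main

import (
	"errors"
	"fmt"
)

// fetchAttachment stands in for the graph call that can fail.
func fetchAttachment() (string, error) {
	return "", errors.New("throttled: retry later")
}

// buggy mirrors the shape of the bug: ':=' in the if-statement declares a
// new 'err' that shadows the one being returned, so the failure is lost.
func buggy() error {
	var err error
	if attachment, err := fetchAttachment(); err == nil {
		_ = attachment
	}
	return err // always nil: the inner err was a different variable
}

// fixed declares the variables up front and assigns with '=', so the
// retrieval error propagates to the caller.
func fixed() error {
	var (
		attachment string
		err        error
	)
	if attachment, err = fetchAttachment(); err == nil {
		_ = attachment
	}
	return err
}

func main() {
	fmt.Println("buggy returns:", buggy()) // <nil>
	fmt.Println("fixed returns:", fixed()) // throttled: retry later
}
```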
* Split KopiaWrapper into repo handle and logic
With ModelStore, multiple structs need a reference to the kopia repo.
Make a small wrapper struct (conn) that can open and initialize a repo. The
wrapper handles concurrent closes and opens and does ref counting to
ensure it only drops the kopia handle when the last reference is closed.
Rename KopiaWrapper to Wrapper and keep backup/restore functionality
in it.
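A sketch of the ref-counting idea with hypothetical names: the wrapper counts openers and only releases the underlying handle when the last reference closes; a mutex keeps concurrent opens and closes safe.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

type conn struct {
	mu      sync.Mutex
	refs    int
	isOpen  bool
	closeFn func() // stands in for closing the real kopia repo handle
}

func (c *conn) open() error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if !c.isOpen {
		// First opener: acquire the underlying handle here.
		c.isOpen = true
	}
	c.refs++
	return nil
}

func (c *conn) close() error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.refs == 0 {
		return errors.New("close without matching open")
	}
	c.refs--
	if c.refs == 0 {
		c.isOpen = false
		if c.closeFn != nil {
			c.closeFn() // drop the handle only when the last reference closes
		}
	}
	return nil
}

func main() {
	c := &conn{closeFn: func() { fmt.Println("underlying repo closed") }}
	_ = c.open() // e.g. Wrapper
	_ = c.open() // e.g. ModelStore
	_ = c.close()
	_ = c.close() // prints "underlying repo closed"
}
```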
Tests that run multiple sub-tests do not use the fields in the test
suite because that would cause the model store instance to be reused
instead of having a new model store instance for each subtest.
* Implement ModelStore GetByType and Get
* Add tests for ModelStore Get functions
* Add stricter "type" checks for loaded models
Take modelType as parameter and check the model in question matches that
type. Adds a little extra layer of protection if models happen to have
the same struct layout.
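A sketch of the stricter type check, using an in-memory map in place of kopia manifests (all names hypothetical): the modelType parameter guards against two models that happen to share a struct layout being read interchangeably.

```go
package main

import (
	"errors"
	"fmt"
)

type modelType string

const (
	BackupOpModel  modelType = "backupOperation"
	RestoreOpModel modelType = "restoreOperation"
)

type stored struct {
	t    modelType
	data map[string]string
}

type modelStore struct {
	byID map[string]stored
}

var errModelTypeMismatch = errors.New("model type mismatch")

// Get refuses to return a model whose stored type differs from the
// requested one, even if the payloads would unmarshal identically.
func (ms *modelStore) Get(t modelType, id string) (map[string]string, error) {
	s, ok := ms.byID[id]
	if !ok {
		return nil, errors.New("not found")
	}
	if s.t != t {
		return nil, fmt.Errorf("%w: want %s, stored %s", errModelTypeMismatch, t, s.t)
	}
	return s.data, nil
}

func main() {
	ms := &modelStore{byID: map[string]stored{
		"id1": {t: BackupOpModel, data: map[string]string{"status": "ok"}},
	}}
	_, err := ms.Get(RestoreOpModel, "id1")
	fmt.Println(err) // model type mismatch: want restoreOperation, stored backupOperation
}
```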
e2e wiring of persistence is not yet complete.
Will need modelstore integration, and additional
information about file and error counts from kw and gc.
* Add ModelStore Update operation
* Tests for ModelStore Update function
* Add regression test for error during Update()
Ensure that if an error occurs during a ModelStore update operation the
previously stored model remains unchanged and no new model is visible to
the user.
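A sketch of the regression-test shape (hypothetical store and model types): force the update to fail, then assert the previously stored model is still returned unchanged.

```go
package store

import "testing"

type errTest string

func (e errTest) Error() string { return string(e) }

var errFailed = errTest("injected update failure")

type fakeStore struct {
	saved   string
	failPut bool
}

func (f *fakeStore) Update(v string) error {
	if f.failPut {
		return errFailed
	}
	f.saved = v
	return nil
}

func (f *fakeStore) Get() string { return f.saved }

func TestUpdateFailureLeavesOldModel(t *testing.T) {
	s := &fakeStore{saved: "v1"}
	s.failPut = true

	if err := s.Update("v2"); err == nil {
		t.Fatal("expected the injected update error")
	}
	if got := s.Get(); got != "v1" {
		t.Fatalf("stored model changed after failed update: %q", got)
	}
}
```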
* Simple Get and Put implementations
The Get implementation currently uses the kopia ID of the model/manifest.
* Basic tests for ModelStore Get/Put
* allow connect to create .corso config file
Current bug: if no .corso config file exists, then repo connect
will always fail, even if it has the correct details to build
a new config file. Solution: allow connect to build a .corso
config file when missing, so long as the operation succeeds
otherwise.
In tandem, return an error whenever a user attempts to
call repo connect with details that do not match the existing
.corso config file.
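A simplified sketch of the connect flow described above (hypothetical config type and paths): write a new config only when one does not exist, and reject a connect whose details conflict with an existing file.

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os"
	"path/filepath"
)

type config struct {
	Bucket string `json:"bucket"`
	Prefix string `json:"prefix"`
}

func connect(path string, incoming config) error {
	raw, err := os.ReadFile(path)
	if errors.Is(err, os.ErrNotExist) {
		// No config yet: a successful connect may create one.
		data, _ := json.Marshal(incoming)
		return os.WriteFile(path, data, 0o600)
	}
	if err != nil {
		return err
	}

	var existing config
	if err := json.Unmarshal(raw, &existing); err != nil {
		return err
	}
	if existing != incoming {
		return fmt.Errorf("connect details do not match existing config at %s", path)
	}
	return nil
}

func main() {
	path := filepath.Join(os.TempDir(), ".corso-example.json")
	defer os.Remove(path)

	fmt.Println(connect(path, config{Bucket: "b1"})) // creates the file
	fmt.Println(connect(path, config{Bucket: "b2"})) // mismatch error
}
```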
* add operation results structs
Operations, both backup and restore, need to hold the
results of their operation and be able to marshal the struct
to JSON for output.
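A sketch of a results struct (field names are illustrative) that both backup and restore operations could hold and marshal for output.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type Results struct {
	ItemsRead    int       `json:"itemsRead"`
	ItemsWritten int       `json:"itemsWritten"`
	ReadErrors   int       `json:"readErrors"`
	WriteErrors  int       `json:"writeErrors"`
	StartedAt    time.Time `json:"startedAt"`
	CompletedAt  time.Time `json:"completedAt"`
}

func main() {
	r := Results{ItemsRead: 10, ItemsWritten: 10, StartedAt: time.Now()}
	out, _ := json.MarshalIndent(r, "", "  ")
	fmt.Println(string(out))
}
```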
* Change DataCollection to return channel directly
Precursor to restoring multiple items from kopia. Allows one to keep a
DataCollection open until all items are processed without blocking
consumers of the DataCollection (they can use a select-block if needed).
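A sketch of the consumer side: with the channel exposed directly, a caller can select on it alongside ctx.Done() to stop cleanly when the context is cancelled.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

func consume(ctx context.Context, items <-chan string) {
	for {
		select {
		case <-ctx.Done():
			fmt.Println("stopping:", ctx.Err())
			return
		case it, ok := <-items:
			if !ok {
				return // collection drained
			}
			fmt.Println("got", it)
		}
	}
}

func main() {
	ch := make(chan string, 2)
	ch <- "a"
	ch <- "b"
	close(ch)

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	consume(ctx, ch)
}
```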
* Update tests for new DataCollection interface
* Handle context cancellation with DataCollection
GraphConnector exports two error types: Recoverable and NonRecoverable. The package also implements error checks to confirm whether errors are one of the exported types.
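A sketch of the error pair and its check helpers, built on the standard errors package; the type and function names here are hypothetical, not the package's real identifiers.

```go
package main

import (
	"errors"
	"fmt"
)

// RecoverableGCError marks failures the caller may retry or skip past.
type RecoverableGCError struct{ Err error }

func (e RecoverableGCError) Error() string { return "recoverable: " + e.Err.Error() }
func (e RecoverableGCError) Unwrap() error { return e.Err }

// NonRecoverableGCError marks failures that should abort processing.
type NonRecoverableGCError struct{ Err error }

func (e NonRecoverableGCError) Error() string { return "non-recoverable: " + e.Err.Error() }
func (e NonRecoverableGCError) Unwrap() error { return e.Err }

// IsRecoverableError reports whether err is (or wraps) a recoverable error.
func IsRecoverableError(err error) bool {
	var re RecoverableGCError
	return errors.As(err, &re)
}

func main() {
	err := fmt.Errorf("fetching mail: %w", RecoverableGCError{Err: errors.New("throttled")})
	fmt.Println(IsRecoverableError(err)) // true
}
```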
* separate tenantID from m365 creds
Now that account.Account is in place, the tenant ID needs
to be removed from the credential set (it isn't actually
a secret) and placed in the account configuration instead.
* Update how S3 storage structs are generated
* fix bug in printing year of date
* use the name of the test instead of trying to pull name from runtime
* always log the time when making the storage struct
* don't allow user to specify prefix
* Fixup tests for new test storage API
* Update function name and comment
* move config unions to common code
The configuration union handlers in Storage and Account
overlapped significantly in behavior. Moving those helpers into
a common code folder was requested. Although the behavior
was similar across the files, the types were not, requiring
the addition of generics to cover both.
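A sketch of the generic union helper (hypothetical names): both Storage and Account keep per-provider configs as key/value maps, and a single generic function unions override values onto a base config.

```go
package main

import "fmt"

type config interface {
	Values() map[string]string
}

type s3Config map[string]string

func (c s3Config) Values() map[string]string { return c }

// union overlays each config's values left to right, so later configs win;
// the type parameter lets Storage and Account reuse the same helper.
func union[T config](cfgs ...T) map[string]string {
	out := map[string]string{}
	for _, c := range cfgs {
		for k, v := range c.Values() {
			out[k] = v
		}
	}
	return out
}

func main() {
	base := s3Config{"bucket": "b", "prefix": "old"}
	override := s3Config{"prefix": "new"}
	fmt.Println(union(base, override)) // map[bucket:b prefix:new]
}
```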