These fields can be used to look up snapshots in kopia as they are part
of the `Source` struct. They are also stored in the kopia sessions but
they should not cause session collisions. Sessions have unique IDs to go
along with this information.
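As a rough illustration of the lookup, kopia identifies a snapshot source by host, user, and path. A minimal sketch, assuming the tenant and user map onto those fields (the actual mapping corso stores may differ):

```go
package example

import (
	"context"

	"github.com/kopia/kopia/repo"
	"github.com/kopia/kopia/snapshot"
)

// snapshotsForOwner lists snapshots whose source matches a tenant/user pair.
// The Host/UserName/Path mapping below is an assumption for illustration.
func snapshotsForOwner(
	ctx context.Context,
	rep repo.Repository,
	tenantID, userID string,
) ([]*snapshot.Manifest, error) {
	src := snapshot.SourceInfo{
		Host:     tenantID,   // assumption: tenant recorded as host
		UserName: userID,     // assumption: user recorded as username
		Path:     "exchange", // hypothetical service label
	}

	return snapshot.ListSnapshots(ctx, rep, src)
}
```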
* Implement getting data for directory subtree
Return a slice of collections with data for a given directory subtree in
kopia. Traverse the full directory before creating a DataCollection
instead of sending items as they are found because future
implementations may cause blocking on send. This could reduce
parallelism because the code won't be able to find other directories to
traverse until the files are seen. Kopia also currently loads the entire
directory at once so there's not much benefit to streaming.
The system will now continue pulling data until completion and report all
errors at the end of the run.
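A minimal sketch of the traverse-everything-first approach, using stand-in types rather than kopia's fs package:

```go
package example

import "path"

// entry stands in for whatever directory/file abstraction the wrapper walks;
// the real code operates on kopia's fs entries.
type entry struct {
	name     string
	isDir    bool
	data     []byte
	children []entry
}

// collectSubtree walks the whole subtree before returning anything, grouping
// file data by its parent directory so one collection per directory can be
// built afterwards.
func collectSubtree(root entry) map[string][][]byte {
	byDir := map[string][][]byte{}

	var walk func(dir string, e entry)
	walk = func(dir string, e entry) {
		if !e.isDir {
			byDir[dir] = append(byDir[dir], e.data)
			return
		}

		sub := path.Join(dir, e.name)
		for _, c := range e.children {
			walk(sub, c)
		}
	}

	walk("", root)

	return byDir
}
```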
* Tests for getting persisted subtree data including some error cases
Update the backup operation to create RestorePoint and RestorePointDetails models in the repository
Add modelstore to the operation to allow backup/restore operations to update/query for corso models
Closes #268
* Factor out common code for getting kopia items
Both directory and single item restore in kopia need to do common tasks
like getting the item in question. Factor out that common code and
adjust tests to prep for directory restore.
Use a slice to back the data instead of adding directly to the channel
for two reasons (this may change in the future though):
* kopia loads all data about a directory at the same time
* consumers of the DataCollection may not pull items from the channel
at a fast rate, which could block adding to the channel. This could
lead to delays in discovering other directories to traverse in
multi-threaded scenarios
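A sketch of what a slice-backed collection could look like; the real corso interface and item type differ:

```go
package example

// kopiaDataCollection buffers items in a slice and only copies them to a
// channel when a consumer asks for them, so a slow consumer never stalls the
// directory walk that produced the data.
type kopiaDataCollection struct {
	path  []string
	items [][]byte
}

func (c *kopiaDataCollection) FullPath() []string { return c.path }

func (c *kopiaDataCollection) Items() <-chan []byte {
	ch := make(chan []byte)

	go func() {
		defer close(ch)

		for _, item := range c.items {
			ch <- item
		}
	}()

	return ch
}
```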
* Split KopiaWrapper into repo handle and logic
With ModelStore, multiple structs need a reference to the kopia repo.
Make a small wrapper type (conn) that can open and initialize a repo. The
wrapper handles concurrent closes and opens and does ref counting to
ensure it only drops the kopia handle when the last reference is closed.
Rename KopiaWrapper to Wrapper and keep backup/restore functionality
in it.
Tests that run multiple sub-tests do not use the suite's fields, since that
would reuse a single model store instance instead of creating a new one for
each subtest.
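The ref counting described above might look roughly like this sketch; method names and fields are guesses, not corso's actual implementation:

```go
package example

import (
	"context"
	"errors"
	"sync"

	"github.com/kopia/kopia/repo"
)

// conn holds the shared kopia handle and only closes it once the last
// borrower has released it.
type conn struct {
	mu       sync.Mutex
	refCount int
	rep      repo.Repository
}

// wrap takes another reference to an already-open connection.
func (c *conn) wrap() error {
	c.mu.Lock()
	defer c.mu.Unlock()

	if c.rep == nil {
		return errors.New("connection already closed")
	}

	c.refCount++

	return nil
}

// Close releases one reference and drops the kopia handle when the count
// reaches zero.
func (c *conn) Close(ctx context.Context) error {
	c.mu.Lock()
	defer c.mu.Unlock()

	if c.rep == nil || c.refCount == 0 {
		return nil
	}

	c.refCount--
	if c.refCount > 0 {
		return nil
	}

	err := c.rep.Close(ctx)
	c.rep = nil

	return err
}
```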
* Implement ModelStore GetByType and Get
* Add tests for ModelStore Get functions
* Add stricter "type" checks for loaded models
Take modelType as a parameter and check that the model in question matches
that type. This adds a little extra layer of protection if models happen to
have the same struct layout.
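A minimal sketch of the check, assuming the model type is recorded as a label on the stored manifest (the label key is hypothetical):

```go
package example

import "fmt"

// checkModelType refuses to decode a stored payload into the wrong Go struct,
// even if two model types happen to share a layout.
func checkModelType(labels map[string]string, requested string) error {
	stored := labels["model_type"] // assumed label key
	if stored != requested {
		return fmt.Errorf("bad model type: stored %q, requested %q", stored, requested)
	}

	return nil
}
```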
* Add ModelStore Update operation
* Tests for ModelStore Update function
* Add regression test for error during Update()
Ensure that if an error occurs during a ModelStore update operation the
previously stored model remains unchanged and no new model is visible to
the user.
* Simple Get and Put implementations
Get implementation is currently the one that uses the kopia ID of the
model/manifest.
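As a rough illustration, kopia's manifest API can back this kind of Get/Put; the label schema and helper names below are assumptions rather than corso's actual code:

```go
package example

import (
	"context"

	"github.com/kopia/kopia/manifest"
	"github.com/kopia/kopia/repo"
)

// putModel stores a model as a kopia manifest and returns the manifest ID the
// caller can hand back to getModel later.
func putModel(
	ctx context.Context,
	rw repo.RepositoryWriter,
	kind string,
	m interface{},
) (manifest.ID, error) {
	return rw.PutManifest(ctx, map[string]string{"type": kind}, m)
}

// getModel fetches a model by its kopia manifest ID and decodes the stored
// payload into m.
func getModel(
	ctx context.Context,
	r repo.Repository,
	id manifest.ID,
	m interface{},
) error {
	_, err := r.GetManifest(ctx, id, m)
	return err
}
```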
* Basic tests for ModelStore Get/Put
* Change DataCollection to return channel directly
Precursor to restoring multiple items from kopia. Allows one to keep a
DataCollection open until all items are processed without blocking
consumers of the DataCollection (they can use a select-block if needed).
* Update tests for new DataCollection interface
* Handle context cancellation with DataCollection
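A consumer-side sketch of reading from the returned channel while honoring cancellation; the item type is a placeholder:

```go
package example

import "context"

// drainCollection reads every item from the channel, bailing out early if the
// context is cancelled before the collection is fully consumed.
func drainCollection(ctx context.Context, items <-chan []byte) error {
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()

		case item, ok := <-items:
			if !ok {
				// Channel closed: all items have been consumed.
				return nil
			}

			_ = item // process the item here
		}
	}
}
```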
* separate tenantID from m365 creds
Now that account.Account is in place, the tenant ID needs
to be removed from the credential set (it isn't actually
a secret) and placed in the account configuration instead.
* Update how S3 storage structs are generated
* fix bug in printing year of date
* use the name of the test instead of trying to pull the name from runtime
* always log the time when making the storage struct
* don't allow user to specify prefix
* Fixup tests for new test storage API
* Update function name and comment
* hook up restore end-to-end
Now that GC and KW both provide restore operations for a
single message, we can hook up the end-to-end restore
process. Integration tests for this change will follow in the
next PR.
* DataCollection and DataStream structs for kopia
* Helper function to get a single item of data
Returns a DataCollection with a single item.
* Test for backing up and restoring a single item
* read test config from local file
Allows local configuration of the test environment by reading from
a .toml config file. If no file exists, the file read is a no-op. Value
prioritization is specified in the readTestConfig() func.
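A sketch of that prioritization, assuming viper handles the .toml parsing; the file name, key, and env variable are illustrative only:

```go
package example

import (
	"os"

	"github.com/spf13/viper"
)

// readTestConfig loads optional test settings from a local .toml file, then
// lets environment variables override them. A missing file is a no-op.
func readTestConfig() (map[string]string, error) {
	v := viper.New()

	const cfgPath = "./corso_test.toml" // hypothetical path

	if _, err := os.Stat(cfgPath); err == nil {
		v.SetConfigFile(cfgPath)

		if err := v.ReadInConfig(); err != nil {
			return nil, err
		}
	}

	cfg := map[string]string{
		"bucket": v.GetString("bucket"), // hypothetical key
	}

	if env, ok := os.LookupEnv("S3_BUCKET"); ok { // hypothetical env var
		cfg["bucket"] = env
	}

	return cfg, nil
}
```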
Fill in BackupCollection logic for kopia dirs and test
"Rehydrate" the given DataCollections into a hierarchy of kopia
virtualfs directories, allowing kopia to make a single snapshot of them.
Directory structure is extracted from the information returned by
DataCollection.FullPath().
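A rough sketch of the rehydration step using kopia's virtualfs package; path merging and error handling are omitted, and exact virtualfs signatures vary a bit between kopia versions:

```go
package example

import (
	"bytes"
	"io"

	"github.com/kopia/kopia/fs"
	"github.com/kopia/kopia/fs/virtualfs"
)

// buildTree turns items grouped by directory into a virtual directory tree
// that a kopia snapshot can be taken of. Only a single directory level is
// shown; the real code derives nesting from DataCollection.FullPath().
func buildTree(rootName string, itemsByDir map[string]map[string][]byte) fs.Directory {
	var dirs []fs.Entry

	for dirName, items := range itemsByDir {
		var files []fs.Entry

		for itemName, data := range items {
			files = append(files, virtualfs.StreamingFileFromReader(
				itemName,
				io.NopCloser(bytes.NewReader(data)),
			))
		}

		dirs = append(dirs, virtualfs.NewStaticDirectory(dirName, files))
	}

	return virtualfs.NewStaticDirectory(rootName, dirs)
}
```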
Add skeleton and test for backing up data with kopia
Full code for launching kopia snapshots is present but code for building
the directory structure to hand kopia is not. Therefore, no data will be
backed up right now.
* adds env-based integration test controls
Some integration tests were running locally without being told
to do so via env variables. This adds some controls for validating
if and when those tests should run, and for ensuring the env
variables necessary to run the test are available.
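A sketch of such a guard; the variable names are hypothetical:

```go
package example

import (
	"os"
	"testing"
)

// mustRunIntegration skips the test unless integration tests were explicitly
// enabled, and fails fast if required configuration is missing.
func mustRunIntegration(t *testing.T, requiredEnvs ...string) {
	t.Helper()

	if os.Getenv("CORSO_CI_TESTS") == "" { // hypothetical opt-in flag
		t.Skip("integration tests not enabled via env")
	}

	for _, v := range requiredEnvs {
		if os.Getenv(v) == "" {
			t.Fatalf("missing required env var %s", v)
		}
	}
}
```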
* testing helper comments, func name change
* add testing timeOf log, skip gc ci tests
* corrected the test skip in GC_test
* update kopia_test with helpers
* rename /testing files to *_test.go
This will ensure these files do not compile into the release binary.
* Revert "rename /testing files to *_test.go"
This reverts commit 04fd2046cc6d11bf3e3eb98d98a3e84b1bba6366.
POD struct that holds some simple information about a kopia upload. Not
wrapping or returning a handle to a kopia struct to keep upper layers of
corso clear of type dependencies. Struct will help with tests as it can
be returned from public functions in the KopiaWrapper package allowing
more black-box testing. Especially useful since we don't yet have a
restore flow, which means we can't test data == restore(backup(data)).
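The struct might look something like this sketch; field names are guesses, and the conversion from kopia's snapshot manifest is only illustrative:

```go
package example

import "github.com/kopia/kopia/snapshot"

// BackupStats is a plain-data summary of a kopia upload, so callers of the
// wrapper never handle kopia types directly.
type BackupStats struct {
	SnapshotID     string
	TotalFileCount int
	ErrorCount     int
	Incomplete     bool
}

// manifestToStats copies the interesting counters out of the snapshot result.
func manifestToStats(man *snapshot.Manifest) BackupStats {
	return BackupStats{
		SnapshotID:     string(man.ID),
		TotalFileCount: int(man.Stats.TotalFileCount),
		ErrorCount:     int(man.Stats.ErrorCount),
		Incomplete:     man.IncompleteReason != "",
	}
}
```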
* Add function to open kopia and keep handle
* Add kopia handle in Repository
* Have Repository manage lifetime of KopiaWrapper
* Have CLI manage lifetime of Repository
* validate required storage props (#85)
Centralizes validation of required storage config properties within
the storage package. Requirements are checked eagerly at
configuration creation, and lazily at config retrieval.
Additionally, updates /pkg/storage tests to use suites
and assert funcs.
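A minimal sketch of the centralized check; property names are illustrative, not the real corso keys:

```go
package example

import "fmt"

// requiredProps lists the storage settings that must be present; the same
// check can run eagerly at config creation and lazily at retrieval.
var requiredProps = []string{"bucket", "access_key", "secret_key"}

func validateStorageConfig(cfg map[string]string) error {
	for _, k := range requiredProps {
		if cfg[k] == "" {
			return fmt.Errorf("missing required storage property: %s", k)
		}
	}

	return nil
}
```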
* add validation failure tests to storage
* add common config and encryption passwd
Adds a provider-independent configuration handler and the
encryption password config property. The password is used
to encrypt and decrypt the kopia repository properties file.
* fix corso_password in ci.yml
* actually use the corso password in testing
* replace passwd in ci.yml with a secret
* ci.yml secret typo fix
A small collection of changes and code cleanup to successfully run
'corso repo init s3' manually. Adds a default S3 URL (might want
this configurable for local testing) and AWS session token support.
* adds kopia pkg to handle integration (#25)
internal/kopia will be used to abstract all kopia integration in a
central location.
* defer blob.Storage closure in kopia
blob.Storage objects must be closed at the end of their use, but that
close call currently isn't being made.
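A sketch of the intended pattern; the constructor is passed in so the example doesn't depend on a specific provider's New() signature:

```go
package example

import (
	"context"

	"github.com/kopia/kopia/repo/blob"
)

// withStorage opens a blob.Storage, guarantees it is closed when we're done,
// and runs the supplied work against it.
func withStorage(
	ctx context.Context,
	open func(context.Context) (blob.Storage, error),
	use func(context.Context, blob.Storage) error,
) error {
	st, err := open(ctx)
	if err != nil {
		return err
	}
	// Always release the storage handle, even on error paths.
	defer st.Close(ctx) //nolint:errcheck

	return use(ctx, st)
}
```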