Compare commits

...

14 Commits

Author SHA1 Message Date
ryanfkeepers
dab0808ca7 setting aside in case it's wanted 2023-07-20 18:37:00 -06:00
ryanfkeepers
5efbd966fd clean up exchange adv restore test 2023-07-20 18:36:32 -06:00
ryanfkeepers
effecdd290 add cli flag for to-resource
adds a cli flag for restoring to a different resource
(mailbox, user, site) from the one stored in the
backup.
2023-07-20 11:03:34 -06:00
ryanfkeepers
4580c8f6c0 check service enabled on restore
Now that restore can target a user who is different from
the backup user, the ConsumeRestoreCollections call
in m365 also needs to check whether the protectedResource
targeted for restore has its services enabled.
2023-07-19 19:36:37 -06:00
ryanfkeepers
23338c2aa3 add restore to alternate resource
adds support for restoring to a resource that
differs from the one whose data appears in the backup.
2023-07-19 18:35:18 -06:00
ryanfkeepers
683fb248e3 add restore things container, move perms
This PR includes two smaller changes that
cascaded to touch a lot of files:
first, introduces inject.RestoreConsumerConfig, which
is a container-of-things for holding common restore configs and options.
second, moves the restorePermissions flag from options
into the restoreConfig.
2023-07-19 14:05:17 -06:00
ryanfkeepers
3a02e3269b look up restore resource if specified
If the restore configuration specifies a protected
resource as a restore target, use that as the destination
for the restore.  First step is to ensure the provided target
can be retrieved and identified.
2023-07-19 13:03:39 -06:00
ryanfkeepers
1272066d50 add library deletion to test cleanup 2023-07-19 10:55:01 -06:00
ryanfkeepers
77a70e88a9 add tests for deleted drives
add tests for restoring to deleted drives,
and fix up some other tests and code
at the same time.
2023-07-19 10:54:33 -06:00
ryanfkeepers
919d59dd69 add integration tests for missing drives 2023-07-19 10:54:33 -06:00
ryanfkeepers
8db03d4cd7 utilize old drive name when restoring
utilizes the old drive name in case the drive
was deleted between backup and restore.
Priority is first to use drives whose ids match the
backup id; second, to use drives whose names
match the backup drive name; third, to fall
back to a new, arbitrary name.
2023-07-19 10:53:59 -06:00
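
The drive-selection priority described above can be sketched as follows (a standalone illustration; `pickRestoreDrive` and the `drive` struct are hypothetical names, not the repo's actual API):

```go
package main

import "fmt"

// drive is a minimal stand-in for an m365 drive record.
type drive struct {
	id   string
	name string
}

// pickRestoreDrive applies the priority described in the commit:
// 1) prefer a drive whose id matches the backup drive id;
// 2) otherwise, a drive whose name matches the backup drive name;
// 3) otherwise, fall back to a new drive with an arbitrary name.
func pickRestoreDrive(current []drive, backupID, backupName, fallbackName string) drive {
	for _, d := range current {
		if d.id == backupID {
			return d
		}
	}

	for _, d := range current {
		if d.name == backupName {
			return d
		}
	}

	// neither id nor name matched: restore into a new drive
	return drive{id: "", name: fallbackName}
}

func main() {
	// the backed-up drive id "d1" no longer exists, but a drive
	// with the same name does, so the name match wins.
	drives := []drive{{id: "d2", name: "Documents"}}
	got := pickRestoreDrive(drives, "d1", "Documents", "Corso Restore")
	fmt.Println(got.name) // prints "Documents"
}
```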
Keepers
3866bfee3b
feed backup drive names into restore (#3842)
adds a cache on the m365 controller
which, through a new interface func,
writes metadata about the backed up
drive ids and names to a cache.  That cache
gets passed into drive-based restores
for more granular usage. Utilization of the cached info arrives in the next change.
2023-07-19 10:52:56 -06:00
Keepers
875eded902
create missing drives on restore (#3795)
when restoring sharepoint, if a document library
was deleted between the time of backup and restore, create a new drive to hold the restored data.
2023-07-18 11:47:45 -06:00
ryanfkeepers
f4b92139bc add api funcs for creating documentLibs
Adds api handlers for creating document libraries in sharepoint.
This is the first step in allowing us to restore drives that were
deleted between backup and restore.
2023-07-18 11:05:19 -06:00
65 changed files with 3055 additions and 713 deletions

View File

@ -19,7 +19,9 @@ inputs:
site: site:
description: Sharepoint site where data is to be purged. description: Sharepoint site where data is to be purged.
libraries: libraries:
description: List of library names within site where data is to be purged. description: List of library names within the site where data is to be purged.
library-prefix:
description: List of library name prefixes within the site; matching libraries will be deleted entirely.
folder-prefix: folder-prefix:
description: Name of the folder to be purged. If falsy, will purge the set of static, well known folders instead. description: Name of the folder to be purged. If falsy, will purge the set of static, well known folders instead.
older-than: older-than:
@ -76,7 +78,10 @@ runs:
M365_TENANT_ADMIN_USER: ${{ inputs.m365-admin-user }} M365_TENANT_ADMIN_USER: ${{ inputs.m365-admin-user }}
M365_TENANT_ADMIN_PASSWORD: ${{ inputs.m365-admin-password }} M365_TENANT_ADMIN_PASSWORD: ${{ inputs.m365-admin-password }}
run: | run: |
./onedrivePurge.ps1 -User ${{ inputs.user }} -FolderPrefixPurgeList "${{ inputs.folder-prefix }}".Split(",") -PurgeBeforeTimestamp ${{ inputs.older-than }} ./onedrivePurge.ps1 \
-User ${{ inputs.user }} \
-FolderPrefixPurgeList "${{ inputs.folder-prefix }}".Split(",") \
-PurgeBeforeTimestamp ${{ inputs.older-than }}
################################################################################################################ ################################################################################################################
# Sharepoint # Sharepoint
@ -90,4 +95,8 @@ runs:
M365_TENANT_ADMIN_USER: ${{ inputs.m365-admin-user }} M365_TENANT_ADMIN_USER: ${{ inputs.m365-admin-user }}
M365_TENANT_ADMIN_PASSWORD: ${{ inputs.m365-admin-password }} M365_TENANT_ADMIN_PASSWORD: ${{ inputs.m365-admin-password }}
run: | run: |
./onedrivePurge.ps1 -Site ${{ inputs.site }} -LibraryNameList "${{ inputs.libraries }}".split(",") -FolderPrefixPurgeList ${{ inputs.folder-prefix }} -PurgeBeforeTimestamp ${{ inputs.older-than }} ./onedrivePurge.ps1 -Site ${{ inputs.site }} \
-LibraryNameList "${{ inputs.libraries }}".split(",") \
-FolderPrefixPurgeList ${{ inputs.folder-prefix }} \
-LibraryPrefixDeleteList ${{ inputs.library-prefix }} \
-PurgeBeforeTimestamp ${{ inputs.older-than }}

View File

@ -62,6 +62,7 @@ jobs:
site: ${{ vars[matrix.site] }} site: ${{ vars[matrix.site] }}
folder-prefix: ${{ vars.CORSO_M365_TEST_PREFIXES }} folder-prefix: ${{ vars.CORSO_M365_TEST_PREFIXES }}
libraries: ${{ vars.CORSO_M365_TEST_SITE_LIBRARIES }} libraries: ${{ vars.CORSO_M365_TEST_SITE_LIBRARIES }}
library-prefix: ${{ vars.CORSO_M365_TEST_PREFIXES }}
older-than: ${{ env.HALF_HOUR_AGO }} older-than: ${{ env.HALF_HOUR_AGO }}
azure-client-id: ${{ secrets.CLIENT_ID }} azure-client-id: ${{ secrets.CLIENT_ID }}
azure-client-secret: ${{ secrets.CLIENT_SECRET }} azure-client-secret: ${{ secrets.CLIENT_SECRET }}

View File

@ -7,6 +7,13 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased] (beta) ## [Unreleased] (beta)
### Added
- Restore commands now accept an optional resource override with the `--to-resource` flag. This allows restores to recreate backup data within different mailboxes, sites, and users.
### Fixed
- SharePoint document libraries deleted after the last backup can now be restored.
- Restore now requires the protected resource to have access to the service being restored.
## [v0.11.0] (beta) - 2023-07-18 ## [v0.11.0] (beta) - 2023-07-18
### Added ### Added
@ -17,6 +24,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Fixed ### Fixed
- Return a ServiceNotEnabled error when a tenant has no active SharePoint license. - Return a ServiceNotEnabled error when a tenant has no active SharePoint license.
- Added retries for http/2 stream connection failures when downloading large item content. - Added retries for http/2 stream connection failures when downloading large item content.
- SharePoint document libraries that were deleted after the last backup can now be restored.
### Known issues ### Known issues
- If a link share is created for an item with inheritance disabled - If a link share is created for an item with inheritance disabled

View File

@ -47,7 +47,7 @@ func prepM365Test(
vpr, cfgFP := tconfig.MakeTempTestConfigClone(t, force) vpr, cfgFP := tconfig.MakeTempTestConfigClone(t, force)
ctx = config.SetViper(ctx, vpr) ctx = config.SetViper(ctx, vpr)
repo, err := repository.Initialize(ctx, acct, st, control.Defaults()) repo, err := repository.Initialize(ctx, acct, st, control.DefaultOptions())
require.NoError(t, err, clues.ToCore(err)) require.NoError(t, err, clues.ToCore(err))
return acct, st, repo, vpr, recorder, cfgFP return acct, st, repo, vpr, recorder, cfgFP

View File

@ -9,11 +9,13 @@ import (
const ( const (
CollisionsFN = "collisions" CollisionsFN = "collisions"
DestinationFN = "destination" DestinationFN = "destination"
ToResourceFN = "to-resource"
) )
var ( var (
CollisionsFV string CollisionsFV string
DestinationFV string DestinationFV string
ToResourceFV string
) )
// AddRestoreConfigFlags adds the restore config flag set. // AddRestoreConfigFlags adds the restore config flag set.
@ -25,5 +27,8 @@ func AddRestoreConfigFlags(cmd *cobra.Command) {
"Sets the behavior for existing item collisions: "+string(control.Skip)+", "+string(control.Copy)+", or "+string(control.Replace)) "Sets the behavior for existing item collisions: "+string(control.Skip)+", "+string(control.Copy)+", or "+string(control.Replace))
fs.StringVar( fs.StringVar(
&DestinationFV, DestinationFN, "", &DestinationFV, DestinationFN, "",
"Overrides the destination where items get restored; '/' places items into their original location") "Overrides the folder where items get restored; '/' places items into their original location")
fs.StringVar(
&ToResourceFV, ToResourceFN, "",
"Overrides the protected resource (mailbox, site, etc) where data gets restored")
} }

View File

@ -200,7 +200,7 @@ func (suite *S3E2ESuite) TestConnectS3Cmd() {
ctx = config.SetViper(ctx, vpr) ctx = config.SetViper(ctx, vpr)
// init the repo first // init the repo first
_, err = repository.Initialize(ctx, account.Account{}, st, control.Defaults()) _, err = repository.Initialize(ctx, account.Account{}, st, control.DefaultOptions())
require.NoError(t, err, clues.ToCore(err)) require.NoError(t, err, clues.ToCore(err))
// then test it // then test it

View File

@ -84,6 +84,7 @@ func (suite *ExchangeUnitSuite) TestAddExchangeCommands() {
"--" + flags.CollisionsFN, testdata.Collisions, "--" + flags.CollisionsFN, testdata.Collisions,
"--" + flags.DestinationFN, testdata.Destination, "--" + flags.DestinationFN, testdata.Destination,
"--" + flags.ToResourceFN, testdata.ToResource,
"--" + flags.AWSAccessKeyFN, testdata.AWSAccessKeyID, "--" + flags.AWSAccessKeyFN, testdata.AWSAccessKeyID,
"--" + flags.AWSSecretAccessKeyFN, testdata.AWSSecretAccessKey, "--" + flags.AWSSecretAccessKeyFN, testdata.AWSSecretAccessKey,
@ -125,6 +126,7 @@ func (suite *ExchangeUnitSuite) TestAddExchangeCommands() {
assert.Equal(t, testdata.Collisions, opts.RestoreCfg.Collisions) assert.Equal(t, testdata.Collisions, opts.RestoreCfg.Collisions)
assert.Equal(t, testdata.Destination, opts.RestoreCfg.Destination) assert.Equal(t, testdata.Destination, opts.RestoreCfg.Destination)
assert.Equal(t, testdata.ToResource, opts.RestoreCfg.ProtectedResource)
assert.Equal(t, testdata.AWSAccessKeyID, flags.AWSAccessKeyFV) assert.Equal(t, testdata.AWSAccessKeyID, flags.AWSAccessKeyFV)
assert.Equal(t, testdata.AWSSecretAccessKey, flags.AWSSecretAccessKeyFV) assert.Equal(t, testdata.AWSSecretAccessKey, flags.AWSSecretAccessKeyFV)

View File

@ -70,6 +70,7 @@ func (suite *OneDriveUnitSuite) TestAddOneDriveCommands() {
"--" + flags.CollisionsFN, testdata.Collisions, "--" + flags.CollisionsFN, testdata.Collisions,
"--" + flags.DestinationFN, testdata.Destination, "--" + flags.DestinationFN, testdata.Destination,
"--" + flags.ToResourceFN, testdata.ToResource,
"--" + flags.AWSAccessKeyFN, testdata.AWSAccessKeyID, "--" + flags.AWSAccessKeyFN, testdata.AWSAccessKeyID,
"--" + flags.AWSSecretAccessKeyFN, testdata.AWSSecretAccessKey, "--" + flags.AWSSecretAccessKeyFN, testdata.AWSSecretAccessKey,
@ -80,6 +81,9 @@ func (suite *OneDriveUnitSuite) TestAddOneDriveCommands() {
"--" + flags.AzureClientSecretFN, testdata.AzureClientSecret, "--" + flags.AzureClientSecretFN, testdata.AzureClientSecret,
"--" + flags.CorsoPassphraseFN, testdata.CorsoPassphrase, "--" + flags.CorsoPassphraseFN, testdata.CorsoPassphrase,
// bool flags
"--" + flags.RestorePermissionsFN,
}) })
cmd.SetOut(new(bytes.Buffer)) // drop output cmd.SetOut(new(bytes.Buffer)) // drop output
@ -99,6 +103,7 @@ func (suite *OneDriveUnitSuite) TestAddOneDriveCommands() {
assert.Equal(t, testdata.Collisions, opts.RestoreCfg.Collisions) assert.Equal(t, testdata.Collisions, opts.RestoreCfg.Collisions)
assert.Equal(t, testdata.Destination, opts.RestoreCfg.Destination) assert.Equal(t, testdata.Destination, opts.RestoreCfg.Destination)
assert.Equal(t, testdata.ToResource, opts.RestoreCfg.ProtectedResource)
assert.Equal(t, testdata.AWSAccessKeyID, flags.AWSAccessKeyFV) assert.Equal(t, testdata.AWSAccessKeyID, flags.AWSAccessKeyFV)
assert.Equal(t, testdata.AWSSecretAccessKey, flags.AWSSecretAccessKeyFV) assert.Equal(t, testdata.AWSSecretAccessKey, flags.AWSSecretAccessKeyFV)
@ -109,6 +114,7 @@ func (suite *OneDriveUnitSuite) TestAddOneDriveCommands() {
assert.Equal(t, testdata.AzureClientSecret, flags.AzureClientSecretFV) assert.Equal(t, testdata.AzureClientSecret, flags.AzureClientSecretFV)
assert.Equal(t, testdata.CorsoPassphrase, flags.CorsoPassphraseFV) assert.Equal(t, testdata.CorsoPassphrase, flags.CorsoPassphraseFV)
assert.True(t, flags.RestorePermissionsFV)
}) })
} }
} }

View File

@ -75,6 +75,7 @@ func (suite *SharePointUnitSuite) TestAddSharePointCommands() {
"--" + flags.CollisionsFN, testdata.Collisions, "--" + flags.CollisionsFN, testdata.Collisions,
"--" + flags.DestinationFN, testdata.Destination, "--" + flags.DestinationFN, testdata.Destination,
"--" + flags.ToResourceFN, testdata.ToResource,
"--" + flags.AWSAccessKeyFN, testdata.AWSAccessKeyID, "--" + flags.AWSAccessKeyFN, testdata.AWSAccessKeyID,
"--" + flags.AWSSecretAccessKeyFN, testdata.AWSSecretAccessKey, "--" + flags.AWSSecretAccessKeyFN, testdata.AWSSecretAccessKey,
@ -85,6 +86,9 @@ func (suite *SharePointUnitSuite) TestAddSharePointCommands() {
"--" + flags.AzureClientSecretFN, testdata.AzureClientSecret, "--" + flags.AzureClientSecretFN, testdata.AzureClientSecret,
"--" + flags.CorsoPassphraseFN, testdata.CorsoPassphrase, "--" + flags.CorsoPassphraseFN, testdata.CorsoPassphrase,
// bool flags
"--" + flags.RestorePermissionsFN,
}) })
cmd.SetOut(new(bytes.Buffer)) // drop output cmd.SetOut(new(bytes.Buffer)) // drop output
@ -111,6 +115,7 @@ func (suite *SharePointUnitSuite) TestAddSharePointCommands() {
assert.Equal(t, testdata.Collisions, opts.RestoreCfg.Collisions) assert.Equal(t, testdata.Collisions, opts.RestoreCfg.Collisions)
assert.Equal(t, testdata.Destination, opts.RestoreCfg.Destination) assert.Equal(t, testdata.Destination, opts.RestoreCfg.Destination)
assert.Equal(t, testdata.ToResource, opts.RestoreCfg.ProtectedResource)
assert.Equal(t, testdata.AWSAccessKeyID, flags.AWSAccessKeyFV) assert.Equal(t, testdata.AWSAccessKeyID, flags.AWSAccessKeyFV)
assert.Equal(t, testdata.AWSSecretAccessKey, flags.AWSSecretAccessKeyFV) assert.Equal(t, testdata.AWSSecretAccessKey, flags.AWSSecretAccessKeyFV)
@ -121,6 +126,9 @@ func (suite *SharePointUnitSuite) TestAddSharePointCommands() {
assert.Equal(t, testdata.AzureClientSecret, flags.AzureClientSecretFV) assert.Equal(t, testdata.AzureClientSecret, flags.AzureClientSecretFV)
assert.Equal(t, testdata.CorsoPassphrase, flags.CorsoPassphraseFV) assert.Equal(t, testdata.CorsoPassphrase, flags.CorsoPassphraseFV)
// bool flags
assert.True(t, flags.RestorePermissionsFV)
}) })
} }
} }

View File

@ -8,14 +8,13 @@ import (
// Control produces the control options based on the user's flags. // Control produces the control options based on the user's flags.
func Control() control.Options { func Control() control.Options {
opt := control.Defaults() opt := control.DefaultOptions()
if flags.FailFastFV { if flags.FailFastFV {
opt.FailureHandling = control.FailFast opt.FailureHandling = control.FailFast
} }
opt.DisableMetrics = flags.NoStatsFV opt.DisableMetrics = flags.NoStatsFV
opt.RestorePermissions = flags.RestorePermissionsFV
opt.SkipReduce = flags.SkipReduceFV opt.SkipReduce = flags.SkipReduceFV
opt.ToggleFeatures.DisableIncrementals = flags.DisableIncrementalsFV opt.ToggleFeatures.DisableIncrementals = flags.DisableIncrementalsFV
opt.ToggleFeatures.DisableDelta = flags.DisableDeltaFV opt.ToggleFeatures.DisableDelta = flags.DisableDeltaFV

View File

@ -18,16 +18,20 @@ type RestoreCfgOpts struct {
// DTTMFormat is the timestamp format appended // DTTMFormat is the timestamp format appended
// to the default folder name. Defaults to // to the default folder name. Defaults to
// dttm.HumanReadable. // dttm.HumanReadable.
DTTMFormat dttm.TimeFormat DTTMFormat dttm.TimeFormat
ProtectedResource string
RestorePermissions bool
Populated flags.PopulatedFlags Populated flags.PopulatedFlags
} }
func makeRestoreCfgOpts(cmd *cobra.Command) RestoreCfgOpts { func makeRestoreCfgOpts(cmd *cobra.Command) RestoreCfgOpts {
return RestoreCfgOpts{ return RestoreCfgOpts{
Collisions: flags.CollisionsFV, Collisions: flags.CollisionsFV,
Destination: flags.DestinationFV, Destination: flags.DestinationFV,
DTTMFormat: dttm.HumanReadable, DTTMFormat: dttm.HumanReadable,
ProtectedResource: flags.ToResourceFV,
RestorePermissions: flags.RestorePermissionsFV,
// populated contains the list of flags that appear in the // populated contains the list of flags that appear in the
// command, according to pflags. Use this to differentiate // command, according to pflags. Use this to differentiate
@ -67,6 +71,9 @@ func MakeRestoreConfig(
restoreCfg.Location = opts.Destination restoreCfg.Location = opts.Destination
} }
restoreCfg.ProtectedResource = opts.ProtectedResource
restoreCfg.IncludePermissions = opts.RestorePermissions
Infof(ctx, "Restoring to folder %s", restoreCfg.Location) Infof(ctx, "Restoring to folder %s", restoreCfg.Location)
return restoreCfg return restoreCfg

View File

@ -68,18 +68,18 @@ func (suite *RestoreCfgUnitSuite) TestValidateRestoreConfigFlags() {
} }
func (suite *RestoreCfgUnitSuite) TestMakeRestoreConfig() { func (suite *RestoreCfgUnitSuite) TestMakeRestoreConfig() {
rco := &RestoreCfgOpts{
Collisions: "collisions",
Destination: "destination",
}
table := []struct { table := []struct {
name string name string
rco *RestoreCfgOpts
populated flags.PopulatedFlags populated flags.PopulatedFlags
expect control.RestoreConfig expect control.RestoreConfig
}{ }{
{ {
name: "not populated", name: "not populated",
rco: &RestoreCfgOpts{
Collisions: "collisions",
Destination: "destination",
},
populated: flags.PopulatedFlags{}, populated: flags.PopulatedFlags{},
expect: control.RestoreConfig{ expect: control.RestoreConfig{
OnCollision: control.Skip, OnCollision: control.Skip,
@ -88,6 +88,10 @@ func (suite *RestoreCfgUnitSuite) TestMakeRestoreConfig() {
}, },
{ {
name: "collision populated", name: "collision populated",
rco: &RestoreCfgOpts{
Collisions: "collisions",
Destination: "destination",
},
populated: flags.PopulatedFlags{ populated: flags.PopulatedFlags{
flags.CollisionsFN: {}, flags.CollisionsFN: {},
}, },
@ -98,6 +102,10 @@ func (suite *RestoreCfgUnitSuite) TestMakeRestoreConfig() {
}, },
{ {
name: "destination populated", name: "destination populated",
rco: &RestoreCfgOpts{
Collisions: "collisions",
Destination: "destination",
},
populated: flags.PopulatedFlags{ populated: flags.PopulatedFlags{
flags.DestinationFN: {}, flags.DestinationFN: {},
}, },
@ -108,6 +116,10 @@ func (suite *RestoreCfgUnitSuite) TestMakeRestoreConfig() {
}, },
{ {
name: "both populated", name: "both populated",
rco: &RestoreCfgOpts{
Collisions: "collisions",
Destination: "destination",
},
populated: flags.PopulatedFlags{ populated: flags.PopulatedFlags{
flags.CollisionsFN: {}, flags.CollisionsFN: {},
flags.DestinationFN: {}, flags.DestinationFN: {},
@ -117,6 +129,23 @@ func (suite *RestoreCfgUnitSuite) TestMakeRestoreConfig() {
Location: "destination", Location: "destination",
}, },
}, },
{
name: "with restore permissions",
rco: &RestoreCfgOpts{
Collisions: "collisions",
Destination: "destination",
RestorePermissions: true,
},
populated: flags.PopulatedFlags{
flags.CollisionsFN: {},
flags.DestinationFN: {},
},
expect: control.RestoreConfig{
OnCollision: control.CollisionPolicy("collisions"),
Location: "destination",
IncludePermissions: true,
},
},
} }
for _, test := range table { for _, test := range table {
suite.Run(test.name, func() { suite.Run(test.name, func() {
@ -125,12 +154,13 @@ func (suite *RestoreCfgUnitSuite) TestMakeRestoreConfig() {
ctx, flush := tester.NewContext(t) ctx, flush := tester.NewContext(t)
defer flush() defer flush()
opts := *rco opts := *test.rco
opts.Populated = test.populated opts.Populated = test.populated
result := MakeRestoreConfig(ctx, opts) result := MakeRestoreConfig(ctx, opts)
assert.Equal(t, test.expect.OnCollision, result.OnCollision) assert.Equal(t, test.expect.OnCollision, result.OnCollision)
assert.Contains(t, result.Location, test.expect.Location) assert.Contains(t, result.Location, test.expect.Location)
assert.Equal(t, test.expect.IncludePermissions, result.IncludePermissions)
}) })
} }
} }

View File

@ -46,6 +46,7 @@ var (
Collisions = "collisions" Collisions = "collisions"
Destination = "destination" Destination = "destination"
ToResource = "toResource"
RestorePermissions = true RestorePermissions = true
AzureClientID = "testAzureClientId" AzureClientID = "testAzureClientId"

View File

@ -21,12 +21,12 @@ import (
odStub "github.com/alcionai/corso/src/internal/m365/onedrive/stub" odStub "github.com/alcionai/corso/src/internal/m365/onedrive/stub"
"github.com/alcionai/corso/src/internal/m365/resource" "github.com/alcionai/corso/src/internal/m365/resource"
m365Stub "github.com/alcionai/corso/src/internal/m365/stub" m365Stub "github.com/alcionai/corso/src/internal/m365/stub"
"github.com/alcionai/corso/src/internal/operations/inject"
"github.com/alcionai/corso/src/internal/tester" "github.com/alcionai/corso/src/internal/tester"
"github.com/alcionai/corso/src/internal/version" "github.com/alcionai/corso/src/internal/version"
"github.com/alcionai/corso/src/pkg/account" "github.com/alcionai/corso/src/pkg/account"
"github.com/alcionai/corso/src/pkg/backup/details" "github.com/alcionai/corso/src/pkg/backup/details"
"github.com/alcionai/corso/src/pkg/control" "github.com/alcionai/corso/src/pkg/control"
"github.com/alcionai/corso/src/pkg/control/testdata"
"github.com/alcionai/corso/src/pkg/count" "github.com/alcionai/corso/src/pkg/count"
"github.com/alcionai/corso/src/pkg/credentials" "github.com/alcionai/corso/src/pkg/credentials"
"github.com/alcionai/corso/src/pkg/fault" "github.com/alcionai/corso/src/pkg/fault"
@ -104,7 +104,15 @@ func generateAndRestoreItems(
print.Infof(ctx, "Generating %d %s items in %s\n", howMany, cat, Destination) print.Infof(ctx, "Generating %d %s items in %s\n", howMany, cat, Destination)
return ctrl.ConsumeRestoreCollections(ctx, version.Backup, sel, restoreCfg, opts, dataColls, errs, ctr) rcc := inject.RestoreConsumerConfig{
BackupVersion: version.Backup,
Options: opts,
ProtectedResource: sel,
RestoreConfig: restoreCfg,
Selector: sel,
}
return ctrl.ConsumeRestoreCollections(ctx, rcc, dataColls, errs, ctr)
} }
// ------------------------------------------------------------------------------------------ // ------------------------------------------------------------------------------------------
@ -144,7 +152,7 @@ func getControllerAndVerifyResourceOwner(
return nil, account.Account{}, nil, clues.Wrap(err, "connecting to graph api") return nil, account.Account{}, nil, clues.Wrap(err, "connecting to graph api")
} }
id, _, err := ctrl.PopulateOwnerIDAndNamesFrom(ctx, resourceOwner, nil) id, _, err := ctrl.PopulateProtectedResourceIDAndName(ctx, resourceOwner, nil)
if err != nil { if err != nil {
return nil, account.Account{}, nil, clues.Wrap(err, "verifying user") return nil, account.Account{}, nil, clues.Wrap(err, "verifying user")
} }
@ -407,10 +415,8 @@ func generateAndRestoreDriveItems(
// input, // input,
// version.Backup) // version.Backup)
opts := control.Options{ opts := control.DefaultOptions()
RestorePermissions: true, restoreCfg.IncludePermissions = true
ToggleFeatures: control.Toggles{},
}
config := m365Stub.ConfigInfo{ config := m365Stub.ConfigInfo{
Opts: opts, Opts: opts,
@ -418,7 +424,7 @@ func generateAndRestoreDriveItems(
Service: service, Service: service,
Tenant: tenantID, Tenant: tenantID,
ResourceOwners: []string{resourceOwner}, ResourceOwners: []string{resourceOwner},
RestoreCfg: testdata.DefaultRestoreConfig(""), RestoreCfg: restoreCfg,
} }
_, _, collections, _, err := m365Stub.GetCollectionsAndExpected( _, _, collections, _, err := m365Stub.GetCollectionsAndExpected(
@ -429,5 +435,13 @@ func generateAndRestoreDriveItems(
return nil, err return nil, err
} }
return ctrl.ConsumeRestoreCollections(ctx, version.Backup, sel, restoreCfg, opts, collections, errs, ctr) rcc := inject.RestoreConsumerConfig{
BackupVersion: version.Backup,
Options: opts,
ProtectedResource: sel,
RestoreConfig: restoreCfg,
Selector: sel,
}
return ctrl.ConsumeRestoreCollections(ctx, rcc, collections, errs, ctr)
} }

View File

@ -72,7 +72,7 @@ func handleExchangeEmailFactory(cmd *cobra.Command, args []string) error {
subject, body, body, subject, body, body,
now, now, now, now) now, now, now, now)
}, },
control.Defaults(), control.DefaultOptions(),
errs, errs,
count.New()) count.New())
if err != nil { if err != nil {
@ -121,7 +121,7 @@ func handleExchangeCalendarEventFactory(cmd *cobra.Command, args []string) error
exchMock.NoAttachments, exchMock.NoCancelledOccurrences, exchMock.NoAttachments, exchMock.NoCancelledOccurrences,
exchMock.NoExceptionOccurrences) exchMock.NoExceptionOccurrences)
}, },
control.Defaults(), control.DefaultOptions(),
errs, errs,
count.New()) count.New())
if err != nil { if err != nil {
@ -172,7 +172,7 @@ func handleExchangeContactFactory(cmd *cobra.Command, args []string) error {
"123-456-7890", "123-456-7890",
) )
}, },
control.Defaults(), control.DefaultOptions(),
errs, errs,
count.New()) count.New())
if err != nil { if err != nil {

View File

@ -19,14 +19,17 @@ Param (
[datetime]$PurgeBeforeTimestamp, [datetime]$PurgeBeforeTimestamp,
[Parameter(Mandatory = $True, HelpMessage = "Purge folders with this prefix")] [Parameter(Mandatory = $True, HelpMessage = "Purge folders with this prefix")]
[String[]]$FolderPrefixPurgeList [String[]]$FolderPrefixPurgeList,
[Parameter(Mandatory = $False, HelpMessage = "Delete document libraries with this prefix")]
[String[]]$LibraryPrefixDeleteList
) )
Set-StrictMode -Version 2.0 Set-StrictMode -Version 2.0
# Attempt to set network timeout to 10min # Attempt to set network timeout to 10min
[System.Net.ServicePointManager]::MaxServicePointIdleTime = 600000 [System.Net.ServicePointManager]::MaxServicePointIdleTime = 600000
function Get-TimestampFromName { function Get-TimestampFromFolderName {
param ( param (
[Parameter(Mandatory = $True, HelpMessage = "Folder ")] [Parameter(Mandatory = $True, HelpMessage = "Folder ")]
[Microsoft.SharePoint.Client.Folder]$folder [Microsoft.SharePoint.Client.Folder]$folder
@ -54,6 +57,36 @@ function Get-TimestampFromName {
return $timestamp return $timestamp
} }
function Get-TimestampFromListName {
param (
[Parameter(Mandatory = $True, HelpMessage = "List ")]
[Microsoft.SharePoint.Client.List]$list
)
$name = $list.Title
# fall back to the list's last user-modified time
[datetime]$timestamp = $list.LastItemUserModifiedDate
try {
# Assumes that the timestamp is at the end and starts with yyyy-mm-ddT and is ISO8601
if ($name -imatch "(\d{4}-\d{2}-\d{2}T.*)") {
$timestamp = [System.Convert]::ToDatetime($Matches.0)
}
# Assumes that the timestamp is at the end and starts with dd-MMM-yyyy_HH-MM-SS
if ($name -imatch "(\d{2}-[a-zA-Z]{3}-\d{4}_\d{2}-\d{2}-\d{2})") {
$timestamp = [datetime]::ParseExact($Matches.0, "dd-MMM-yyyy_HH-mm-ss", [CultureInfo]::InvariantCulture, "AssumeUniversal")
}
}
catch {}
Write-Verbose "List: $name, create timestamp: $timestamp"
return $timestamp
}
function Purge-Library { function Purge-Library {
[CmdletBinding(SupportsShouldProcess)] [CmdletBinding(SupportsShouldProcess)]
Param ( Param (
@ -77,7 +110,7 @@ function Purge-Library {
foreach ($f in $folders) { foreach ($f in $folders) {
$folderName = $f.Name $folderName = $f.Name
$createTime = Get-TimestampFromName -Folder $f $createTime = Get-TimestampFromFolderName -Folder $f
if ($PurgeBeforeTimestamp -gt $createTime) { if ($PurgeBeforeTimestamp -gt $createTime) {
foreach ($p in $FolderPrefixPurgeList) { foreach ($p in $FolderPrefixPurgeList) {
@ -97,7 +130,7 @@ function Purge-Library {
if ($f.ServerRelativeUrl -imatch "$SiteSuffix/{0,1}(.+?)/{0,1}$folderName$") { if ($f.ServerRelativeUrl -imatch "$SiteSuffix/{0,1}(.+?)/{0,1}$folderName$") {
$siteRelativeParentPath = $Matches.1 $siteRelativeParentPath = $Matches.1
} }
if ($PSCmdlet.ShouldProcess("Name: " + $f.Name + " Parent: " + $siteRelativeParentPath, "Remove folder")) { if ($PSCmdlet.ShouldProcess("Name: " + $f.Name + " Parent: " + $siteRelativeParentPath, "Remove folder")) {
Write-Host "Deleting folder: "$f.Name" with parent: $siteRelativeParentPath" Write-Host "Deleting folder: "$f.Name" with parent: $siteRelativeParentPath"
try { try {
@ -110,6 +143,54 @@ function Purge-Library {
} }
} }
function Delete-LibraryByPrefix {
[CmdletBinding(SupportsShouldProcess)]
Param (
[Parameter(Mandatory = $True, HelpMessage = "Document library name prefix")]
[String]$LibraryNamePrefix,
[Parameter(Mandatory = $True, HelpMessage = "Delete libraries created before this date time (UTC)")]
[datetime]$PurgeBeforeTimestamp,
[Parameter(Mandatory = $True, HelpMessage = "Site suffix")]
[String[]]$SiteSuffix
)
Write-Host "`nDeleting libraries with prefix: $LibraryNamePrefix"
$listsToDelete = @()
$lists = Get-PnPList
foreach ($l in $lists) {
$listName = $l.Title
$createTime = Get-TimestampFromListName -List $l
if ($PurgeBeforeTimestamp -gt $createTime) {
if ($listName -like "$LibraryNamePrefix*") {
$listsToDelete += $l
}
}
}
Write-Host "Found"$listsToDelete.count"lists to delete"
foreach ($l in $listsToDelete) {
$listName = $l.Title
if ($PSCmdlet.ShouldProcess("Name: " + $l.Title, "Remove list")) {
Write-Host "Deleting list: "$l.Title
try {
Remove-PnPList -Identity $l.Id -Force
}
catch [ System.Management.Automation.ItemNotFoundException ] {
Write-Host "List: "$l.Title" is already deleted. Skipping..."
}
}
}
}
######## MAIN ######### ######## MAIN #########
# Setup SharePointPnP # Setup SharePointPnP
@ -176,4 +257,8 @@ $FolderPrefixPurgeList = $FolderPrefixPurgeList | ForEach-Object { @($_.Split(',
foreach ($library in $LibraryNameList) { foreach ($library in $LibraryNameList) {
Purge-Library -LibraryName $library -PurgeBeforeTimestamp $PurgeBeforeTimestamp -FolderPrefixPurgeList $FolderPrefixPurgeList -SiteSuffix $siteSuffix Purge-Library -LibraryName $library -PurgeBeforeTimestamp $PurgeBeforeTimestamp -FolderPrefixPurgeList $FolderPrefixPurgeList -SiteSuffix $siteSuffix
} }
foreach ($libraryPfx in $LibraryPrefixDeleteList) {
Delete-LibraryByPrefix -LibraryNamePrefix $libraryPfx -PurgeBeforeTimestamp $PurgeBeforeTimestamp -SiteSuffix $siteSuffix
}

View File

@@ -28,6 +28,10 @@ type is struct {
	name string
}

+func NewProvider(id, name string) *is {
+	return &is{id, name}
+}
+
func (is is) ID() string { return is.id }
func (is is) Name() string { return is.name }
@@ -40,6 +44,11 @@ type Cacher interface {
	ProviderForName(id string) Provider
}

+type CacheBuilder interface {
+	Add(id, name string)
+	Cacher
+}
+
var _ Cacher = &cache{}

type cache struct {
@@ -47,17 +56,29 @@ type cache struct {
	nameToID map[string]string
}

-func NewCache(idToName map[string]string) cache {
-	nti := make(map[string]string, len(idToName))
-
-	for id, name := range idToName {
-		nti[name] = id
-	}
-
-	return cache{
-		idToName: idToName,
-		nameToID: nti,
-	}
-}
+func NewCache(idToName map[string]string) *cache {
+	c := cache{
+		idToName: map[string]string{},
+		nameToID: map[string]string{},
+	}
+
+	if len(idToName) > 0 {
+		nti := make(map[string]string, len(idToName))
+
+		for id, name := range idToName {
+			nti[name] = id
+		}
+
+		c.idToName = idToName
+		c.nameToID = nti
+	}
+
+	return &c
+}
+
+func (c *cache) Add(id, name string) {
+	c.idToName[strings.ToLower(id)] = name
+	c.nameToID[strings.ToLower(name)] = id
+}

// IDOf returns the id associated with the given name.
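The hunk above makes the formerly immutable cache growable: `NewCache` now returns `*cache` seeded with empty maps so that `Add` can insert pairs later. A minimal, self-contained sketch of those semantics (simplified stand-in, not the corso implementation): keys are lowercased on write and lookup, while stored values keep their original casing.

```go
package main

import (
	"fmt"
	"strings"
)

// cache keeps a bidirectional, case-insensitive mapping between ids
// and names. Keys are lowercased; values retain their original case.
type cache struct {
	idToName map[string]string
	nameToID map[string]string
}

func newCache() *cache {
	return &cache{
		idToName: map[string]string{},
		nameToID: map[string]string{},
	}
}

// Add stores the pair in both directions under lowercased keys.
func (c *cache) Add(id, name string) {
	c.idToName[strings.ToLower(id)] = name
	c.nameToID[strings.ToLower(name)] = id
}

// IDOf returns the id stored for the given name, matched case-insensitively.
func (c *cache) IDOf(name string) (string, bool) {
	id, ok := c.nameToID[strings.ToLower(name)]
	return id, ok
}

// NameOf returns the name stored for the given id, matched case-insensitively.
func (c *cache) NameOf(id string) (string, bool) {
	name, ok := c.idToName[strings.ToLower(id)]
	return name, ok
}

func main() {
	c := newCache()
	c.Add("FNORDS", "SMARF")

	id, _ := c.IDOf("smarf")
	name, _ := c.NameOf("fnords")
	fmt.Println(id, name) // FNORDS SMARF
}
```

This mirrors the "change casing" expectation in the TestAdd table: inserting `FNORDS`/`SMARF` makes both values findable via their lowercased counterparts.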


@@ -0,0 +1,60 @@
package idname

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/suite"

	"github.com/alcionai/corso/src/internal/tester"
)

type IDNameUnitSuite struct {
	tester.Suite
}
func TestIDNameUnitSuite(t *testing.T) {
suite.Run(t, &IDNameUnitSuite{Suite: tester.NewUnitSuite(t)})
}
func (suite *IDNameUnitSuite) TestAdd() {
table := []struct {
name string
inID string
inName string
searchID string
searchName string
}{
{
name: "basic",
inID: "foo",
inName: "bar",
searchID: "foo",
searchName: "bar",
},
{
name: "change casing",
inID: "FNORDS",
inName: "SMARF",
searchID: "fnords",
searchName: "smarf",
},
}
for _, test := range table {
suite.Run(test.name, func() {
t := suite.T()
cache := NewCache(nil)
cache.Add(test.inID, test.inName)
id, found := cache.IDOf(test.searchName)
assert.True(t, found)
assert.Equal(t, test.inID, id)
name, found := cache.NameOf(test.searchID)
assert.True(t, found)
assert.Equal(t, test.inName, name)
})
}
}


@@ -52,7 +52,7 @@ func (suite *EventsIntegrationSuite) TestNewBus() {
	)
	require.NoError(t, err, clues.ToCore(err))

-	b, err := events.NewBus(ctx, s, a.ID(), control.Defaults())
+	b, err := events.NewBus(ctx, s, a.ID(), control.DefaultOptions())
	require.NotEmpty(t, b)
	require.NoError(t, err, clues.ToCore(err))


@@ -61,7 +61,7 @@ func (ctrl *Controller) ProduceBackupCollections(
	serviceEnabled, canMakeDeltaQueries, err := checkServiceEnabled(
		ctx,
		ctrl.AC.Users(),
-		path.ServiceType(sels.Service),
+		sels.PathService(),
		sels.DiscreteOwner)
	if err != nil {
		return nil, nil, false, err


@@ -120,7 +120,7 @@ func (suite *DataCollectionIntgSuite) TestExchangeDataCollection() {
			sel := test.getSelector(t)
			uidn := inMock.NewProvider(sel.ID(), sel.Name())

-			ctrlOpts := control.Defaults()
+			ctrlOpts := control.DefaultOptions()
			ctrlOpts.ToggleFeatures.DisableDelta = !canMakeDeltaQueries

			collections, excludes, canUsePreviousBackup, err := exchange.ProduceBackupCollections(
@@ -239,7 +239,7 @@ func (suite *DataCollectionIntgSuite) TestDataCollections_invalidResourceOwner()
				test.getSelector(t),
				nil,
				version.NoBackup,
-				control.Defaults(),
+				control.DefaultOptions(),
				fault.New(true))
			assert.Error(t, err, clues.ToCore(err))
			assert.False(t, canUsePreviousBackup, "can use previous backup")
@@ -296,7 +296,7 @@ func (suite *DataCollectionIntgSuite) TestSharePointDataCollection() {
				nil,
				ctrl.credentials,
				ctrl,
-				control.Defaults(),
+				control.DefaultOptions(),
				fault.New(true))
			require.NoError(t, err, clues.ToCore(err))
			assert.True(t, canUsePreviousBackup, "can use previous backup")
@@ -367,7 +367,7 @@ func (suite *SPCollectionIntgSuite) TestCreateSharePointCollection_Libraries() {
		siteIDs = []string{siteID}
	)

-	id, name, err := ctrl.PopulateOwnerIDAndNamesFrom(ctx, siteID, nil)
+	id, name, err := ctrl.PopulateProtectedResourceIDAndName(ctx, siteID, nil)
	require.NoError(t, err, clues.ToCore(err))

	sel := selectors.NewSharePointBackup(siteIDs)
@@ -381,7 +381,7 @@ func (suite *SPCollectionIntgSuite) TestCreateSharePointCollection_Libraries() {
		sel.Selector,
		nil,
		version.NoBackup,
-		control.Defaults(),
+		control.DefaultOptions(),
		fault.New(true))
	require.NoError(t, err, clues.ToCore(err))
	assert.True(t, canUsePreviousBackup, "can use previous backup")
@@ -414,7 +414,7 @@ func (suite *SPCollectionIntgSuite) TestCreateSharePointCollection_Lists() {
		siteIDs = []string{siteID}
	)

-	id, name, err := ctrl.PopulateOwnerIDAndNamesFrom(ctx, siteID, nil)
+	id, name, err := ctrl.PopulateProtectedResourceIDAndName(ctx, siteID, nil)
	require.NoError(t, err, clues.ToCore(err))

	sel := selectors.NewSharePointBackup(siteIDs)
@@ -428,7 +428,7 @@ func (suite *SPCollectionIntgSuite) TestCreateSharePointCollection_Lists() {
		sel.Selector,
		nil,
		version.NoBackup,
-		control.Defaults(),
+		control.DefaultOptions(),
		fault.New(true))
	require.NoError(t, err, clues.ToCore(err))
	assert.True(t, canUsePreviousBackup, "can use previous backup")


@@ -14,6 +14,7 @@ import (
	"github.com/alcionai/corso/src/internal/m365/support"
	"github.com/alcionai/corso/src/internal/operations/inject"
	"github.com/alcionai/corso/src/pkg/account"
+	"github.com/alcionai/corso/src/pkg/backup/details"
	"github.com/alcionai/corso/src/pkg/control"
	"github.com/alcionai/corso/src/pkg/path"
	"github.com/alcionai/corso/src/pkg/services/m365/api"
@@ -47,6 +48,11 @@ type Controller struct {
	// mutex used to synchronize updates to `status`
	mu     sync.Mutex
	status support.ControllerOperationStatus // contains the status of the last run status

+	// backupDriveIDNames is populated on restore. It maps the backup's
+	// drive names to their id. Primarily for use when creating or looking
+	// up a new drive.
+	backupDriveIDNames idname.CacheBuilder
}

func NewController(
@@ -77,10 +83,11 @@ func NewController(
		AC:           ac,
		IDNameLookup: idname.NewCache(nil),

		credentials: creds,
		ownerLookup: rCli,
		tenant:      acct.ID(),
		wg:          &sync.WaitGroup{},
+		backupDriveIDNames: idname.NewCache(nil),
	}

	return &ctrl, nil
@@ -142,6 +149,16 @@ func (ctrl *Controller) incrementAwaitingMessages() {
	ctrl.wg.Add(1)
}

+func (ctrl *Controller) CacheItemInfo(dii details.ItemInfo) {
+	if dii.SharePoint != nil {
+		ctrl.backupDriveIDNames.Add(dii.SharePoint.DriveID, dii.SharePoint.DriveName)
+	}
+
+	if dii.OneDrive != nil {
+		ctrl.backupDriveIDNames.Add(dii.OneDrive.DriveID, dii.OneDrive.DriveName)
+	}
+}
+
// ---------------------------------------------------------------------------
// Resource Lookup Handling
// ---------------------------------------------------------------------------
@@ -228,7 +245,7 @@ func (r resourceClient) getOwnerIDAndNameFrom(
	return id, name, nil
}

-// PopulateOwnerIDAndNamesFrom takes the provided owner identifier and produces
+// PopulateProtectedResourceIDAndName takes the provided owner identifier and produces
// the owner's name and ID from that value. Returns an error if the owner is
// not recognized by the current tenant.
//
@@ -236,7 +253,7 @@ func (r resourceClient) getOwnerIDAndNameFrom(
// the tenant before reaching this step. In that case, the data gets handed
// down for this func to consume instead of performing further queries. The
// data gets stored inside the controller instance for later re-use.
-func (ctrl *Controller) PopulateOwnerIDAndNamesFrom(
+func (ctrl *Controller) PopulateProtectedResourceIDAndName(
	ctx context.Context,
	owner string, // input value, can be either id or name
	ins idname.Cacher,
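The new `CacheItemInfo` hook records a drive id→name pair only for services that carry drives (OneDrive, SharePoint); items from other services are a no-op. A rough, self-contained sketch of that nil-guard dispatch, using stand-in types rather than the real `details.ItemInfo` structs:

```go
package main

import "fmt"

// driveInfo stands in for the per-service info structs
// (details.OneDriveInfo / details.SharePointInfo); only
// the drive fields matter for this cache.
type driveInfo struct {
	DriveID   string
	DriveName string
}

// itemInfo mirrors the shape of details.ItemInfo: at most one
// service pointer is non-nil per item. Services without drives
// (Exchange, Folder, ...) are simply absent here.
type itemInfo struct {
	OneDrive   *driveInfo
	SharePoint *driveInfo
}

type controller struct {
	backupDriveIDNames map[string]string // drive id -> drive name
}

// cacheItemInfo records the drive id/name pair, if any, so a later
// restore can look up (or recreate) the drive by its old name.
func (c *controller) cacheItemInfo(dii itemInfo) {
	if dii.SharePoint != nil {
		c.backupDriveIDNames[dii.SharePoint.DriveID] = dii.SharePoint.DriveName
	}

	if dii.OneDrive != nil {
		c.backupDriveIDNames[dii.OneDrive.DriveID] = dii.OneDrive.DriveName
	}
}

func main() {
	ctrl := &controller{backupDriveIDNames: map[string]string{}}

	ctrl.cacheItemInfo(itemInfo{OneDrive: &driveInfo{DriveID: "od-id", DriveName: "od-name"}})
	ctrl.cacheItemInfo(itemInfo{}) // non-drive items are a no-op

	fmt.Println(ctrl.backupDriveIDNames["od-id"]) // od-name
}
```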


@@ -12,6 +12,8 @@ import (
	"github.com/stretchr/testify/require"
	"github.com/stretchr/testify/suite"

+	"github.com/alcionai/corso/src/internal/common/dttm"
+	"github.com/alcionai/corso/src/internal/common/idname"
	inMock "github.com/alcionai/corso/src/internal/common/idname/mock"
	"github.com/alcionai/corso/src/internal/data"
	exchMock "github.com/alcionai/corso/src/internal/m365/exchange/mock"
@@ -19,9 +21,11 @@ import (
	"github.com/alcionai/corso/src/internal/m365/resource"
	"github.com/alcionai/corso/src/internal/m365/stub"
	"github.com/alcionai/corso/src/internal/m365/support"
+	"github.com/alcionai/corso/src/internal/operations/inject"
	"github.com/alcionai/corso/src/internal/tester"
	"github.com/alcionai/corso/src/internal/tester/tconfig"
	"github.com/alcionai/corso/src/internal/version"
+	"github.com/alcionai/corso/src/pkg/backup/details"
	"github.com/alcionai/corso/src/pkg/control"
	"github.com/alcionai/corso/src/pkg/control/testdata"
	"github.com/alcionai/corso/src/pkg/count"
@@ -220,7 +224,7 @@ func (suite *ControllerUnitSuite) TestPopulateOwnerIDAndNamesFrom() {
			ctrl := &Controller{ownerLookup: test.rc}

-			rID, rName, err := ctrl.PopulateOwnerIDAndNamesFrom(ctx, test.owner, test.ins)
+			rID, rName, err := ctrl.PopulateProtectedResourceIDAndName(ctx, test.owner, test.ins)
			test.expectErr(t, err, clues.ToCore(err))

			assert.Equal(t, test.expectID, rID, "id")
			assert.Equal(t, test.expectName, rName, "name")
@@ -260,6 +264,82 @@ func (suite *ControllerUnitSuite) TestController_Wait() {
	assert.Equal(t, int64(4), result.Bytes)
}
func (suite *ControllerUnitSuite) TestController_CacheItemInfo() {
var (
odid = "od-id"
odname = "od-name"
spid = "sp-id"
spname = "sp-name"
// intentionally declared outside the test loop
ctrl = &Controller{
wg: &sync.WaitGroup{},
region: &trace.Region{},
backupDriveIDNames: idname.NewCache(nil),
}
)
table := []struct {
name string
service path.ServiceType
cat path.CategoryType
dii details.ItemInfo
expectID string
expectName string
}{
{
name: "exchange",
dii: details.ItemInfo{
Exchange: &details.ExchangeInfo{},
},
expectID: "",
expectName: "",
},
{
name: "folder",
dii: details.ItemInfo{
Folder: &details.FolderInfo{},
},
expectID: "",
expectName: "",
},
{
name: "onedrive",
dii: details.ItemInfo{
OneDrive: &details.OneDriveInfo{
DriveID: odid,
DriveName: odname,
},
},
expectID: odid,
expectName: odname,
},
{
name: "sharepoint",
dii: details.ItemInfo{
SharePoint: &details.SharePointInfo{
DriveID: spid,
DriveName: spname,
},
},
expectID: spid,
expectName: spname,
},
}
for _, test := range table {
suite.Run(test.name, func() {
t := suite.T()
ctrl.CacheItemInfo(test.dii)
name, _ := ctrl.backupDriveIDNames.NameOf(test.expectID)
assert.Equal(t, test.expectName, name)
id, _ := ctrl.backupDriveIDNames.IDOf(test.expectName)
assert.Equal(t, test.expectID, id)
})
}
}
// ---------------------------------------------------------------------------
// Integration tests
// ---------------------------------------------------------------------------
@@ -306,15 +386,19 @@ func (suite *ControllerIntegrationSuite) TestRestoreFailsBadService() {
		}
	)

+	restoreCfg.IncludePermissions = true
+
+	rcc := inject.RestoreConsumerConfig{
+		BackupVersion:     version.Backup,
+		Options:           control.DefaultOptions(),
+		ProtectedResource: sel,
+		RestoreConfig:     restoreCfg,
+		Selector:          sel,
+	}
+
	deets, err := suite.ctrl.ConsumeRestoreCollections(
		ctx,
-		version.Backup,
-		sel,
-		restoreCfg,
-		control.Options{
-			RestorePermissions: true,
-			ToggleFeatures:     control.Toggles{},
-		},
+		rcc,
		nil,
		fault.New(true),
		count.New())
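As the hunk above shows, `inject.RestoreConsumerConfig` collapses what used to be several positional arguments (backup version, selector, restore config, options) into one container passed through the restore call chain. A hedged sketch of the shape of that refactor, using simplified stand-in types rather than the real corso ones:

```go
package main

import "fmt"

// Stand-ins for the real corso types; only the grouping matters here.
type (
	options       struct{ Verbose bool }
	restoreConfig struct{ Location string }
	selector      struct{ ID string }
)

// restoreConsumerConfig bundles everything a restore consumer needs,
// mirroring the role of inject.RestoreConsumerConfig.
type restoreConsumerConfig struct {
	BackupVersion     int
	Options           options
	ProtectedResource selector
	RestoreConfig     restoreConfig
	Selector          selector
}

// Before: consumeRestore(ctx, backupVersion, sel, restoreCfg, opts, ...).
// After: a single config value travels through the call chain, so adding
// a field (e.g. ProtectedResource) no longer changes every signature.
func consumeRestore(rcc restoreConsumerConfig) string {
	return fmt.Sprintf("restoring %s to %s (backup v%d)",
		rcc.Selector.ID, rcc.RestoreConfig.Location, rcc.BackupVersion)
}

func main() {
	rcc := restoreConsumerConfig{
		BackupVersion:     3,
		RestoreConfig:     restoreConfig{Location: "Corso_Restore"},
		ProtectedResource: selector{ID: "user@example.com"},
		Selector:          selector{ID: "user@example.com"},
	}

	fmt.Println(consumeRestore(rcc))
	// restoring user@example.com to Corso_Restore (backup v3)
}
```

Keeping `ProtectedResource` separate from `Selector` is what allows restoring to a resource other than the one stored in the backup.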
@@ -329,6 +413,8 @@ func (suite *ControllerIntegrationSuite) TestRestoreFailsBadService() {
func (suite *ControllerIntegrationSuite) TestEmptyCollections() {
	restoreCfg := testdata.DefaultRestoreConfig("")

+	restoreCfg.IncludePermissions = true
+
	table := []struct {
		name string
		col  []data.RestoreCollection
@@ -385,25 +471,22 @@ func (suite *ControllerIntegrationSuite) TestEmptyCollections() {
			ctx, flush := tester.NewContext(t)
			defer flush()

+			rcc := inject.RestoreConsumerConfig{
+				BackupVersion:     version.Backup,
+				Options:           control.DefaultOptions(),
+				ProtectedResource: test.sel,
+				RestoreConfig:     restoreCfg,
+				Selector:          test.sel,
+			}
+
			deets, err := suite.ctrl.ConsumeRestoreCollections(
				ctx,
-				version.Backup,
-				test.sel,
-				restoreCfg,
-				control.Options{
-					RestorePermissions: true,
-					ToggleFeatures:     control.Toggles{},
-				},
+				rcc,
				test.col,
				fault.New(true),
				count.New())
-			require.NoError(t, err, clues.ToCore(err))
-			assert.NotNil(t, deets)
-
-			stats := suite.ctrl.Wait()
-			assert.Zero(t, stats.Objects)
-			assert.Zero(t, stats.Folders)
-			assert.Zero(t, stats.Successes)
+			require.Error(t, err, clues.ToCore(err))
+			assert.Nil(t, deets)
		})
	}
}
@@ -425,16 +508,24 @@ func runRestore(
		sci.RestoreCfg.Location,
		sci.ResourceOwners)

+	sci.RestoreCfg.IncludePermissions = true
+
	start := time.Now()

	restoreCtrl := newController(ctx, t, sci.Resource, path.ExchangeService)
	restoreSel := getSelectorWith(t, sci.Service, sci.ResourceOwners, true)

+	rcc := inject.RestoreConsumerConfig{
+		BackupVersion:     backupVersion,
+		Options:           control.DefaultOptions(),
+		ProtectedResource: restoreSel,
+		RestoreConfig:     sci.RestoreCfg,
+		Selector:          restoreSel,
+	}
+
	deets, err := restoreCtrl.ConsumeRestoreCollections(
		ctx,
-		backupVersion,
-		restoreSel,
-		sci.RestoreCfg,
-		sci.Opts,
+		rcc,
		collections,
		fault.New(true),
		count.New())
@@ -536,6 +627,7 @@ func runRestoreBackupTest(
	tenant string,
	resourceOwners []string,
	opts control.Options,
+	restoreCfg control.RestoreConfig,
) {
	ctx, flush := tester.NewContext(t)
	defer flush()
@@ -546,7 +638,7 @@ func runRestoreBackupTest(
		Service:        test.service,
		Tenant:         tenant,
		ResourceOwners: resourceOwners,
-		RestoreCfg:     testdata.DefaultRestoreConfig(""),
+		RestoreCfg:     restoreCfg,
	}

	totalItems, totalKopiaItems, collections, expectedData, err := stub.GetCollectionsAndExpected(
@@ -581,6 +673,7 @@ func runRestoreTestWithVersion(
	tenant string,
	resourceOwners []string,
	opts control.Options,
+	restoreCfg control.RestoreConfig,
) {
	ctx, flush := tester.NewContext(t)
	defer flush()
@@ -591,7 +684,7 @@ func runRestoreTestWithVersion(
		Service:        test.service,
		Tenant:         tenant,
		ResourceOwners: resourceOwners,
-		RestoreCfg:     testdata.DefaultRestoreConfig(""),
+		RestoreCfg:     restoreCfg,
	}

	totalItems, _, collections, _, err := stub.GetCollectionsAndExpected(
@@ -618,6 +711,7 @@ func runRestoreBackupTestVersions(
	tenant string,
	resourceOwners []string,
	opts control.Options,
+	restoreCfg control.RestoreConfig,
) {
	ctx, flush := tester.NewContext(t)
	defer flush()
@@ -628,7 +722,7 @@ func runRestoreBackupTestVersions(
		Service:        test.service,
		Tenant:         tenant,
		ResourceOwners: resourceOwners,
-		RestoreCfg:     testdata.DefaultRestoreConfig(""),
+		RestoreCfg:     restoreCfg,
	}

	totalItems, _, collections, _, err := stub.GetCollectionsAndExpected(
@@ -666,6 +760,9 @@ func (suite *ControllerIntegrationSuite) TestRestoreAndBackup() {
	bodyText := "This email has some text. However, all the text is on the same line."
	subjectText := "Test message for restore"

+	restoreCfg := control.DefaultRestoreConfig(dttm.HumanReadableDriveItem)
+	restoreCfg.IncludePermissions = true
+
	table := []restoreBackupInfo{
		{
			name: "EmailsWithAttachments",
@@ -921,10 +1018,8 @@ func (suite *ControllerIntegrationSuite) TestRestoreAndBackup() {
				test,
				suite.ctrl.tenant,
				[]string{suite.user},
-				control.Options{
-					RestorePermissions: true,
-					ToggleFeatures:     control.Toggles{},
-				})
+				control.DefaultOptions(),
+				restoreCfg)
		})
	}
}
@@ -1005,6 +1100,8 @@ func (suite *ControllerIntegrationSuite) TestMultiFolderBackupDifferentNames() {
		for i, collection := range test.collections {
			// Get a restoreCfg per collection so they're independent.
			restoreCfg := testdata.DefaultRestoreConfig("")
+			restoreCfg.IncludePermissions = true
+
			expectedDests = append(expectedDests, destAndCats{
				resourceOwner: suite.user,
				dest:          restoreCfg.Location,
@@ -1037,15 +1134,18 @@ func (suite *ControllerIntegrationSuite) TestMultiFolderBackupDifferentNames() {
			)

			restoreCtrl := newController(ctx, t, test.resourceCat, path.ExchangeService)

+			rcc := inject.RestoreConsumerConfig{
+				BackupVersion:     version.Backup,
+				Options:           control.DefaultOptions(),
+				ProtectedResource: restoreSel,
+				RestoreConfig:     restoreCfg,
+				Selector:          restoreSel,
+			}
+
			deets, err := restoreCtrl.ConsumeRestoreCollections(
				ctx,
-				version.Backup,
-				restoreSel,
-				restoreCfg,
-				control.Options{
-					RestorePermissions: true,
-					ToggleFeatures:     control.Toggles{},
-				},
+				rcc,
				collections,
				fault.New(true),
				count.New())
@@ -1077,10 +1177,7 @@ func (suite *ControllerIntegrationSuite) TestMultiFolderBackupDifferentNames() {
				backupSel,
				nil,
				version.NoBackup,
-				control.Options{
-					RestorePermissions: true,
-					ToggleFeatures:     control.Toggles{},
-				},
+				control.DefaultOptions(),
				fault.New(true))
			require.NoError(t, err, clues.ToCore(err))
			assert.True(t, canUsePreviousBackup, "can use previous backup")
@@ -1089,10 +1186,13 @@ func (suite *ControllerIntegrationSuite) TestMultiFolderBackupDifferentNames() {
			t.Log("Backup enumeration complete")

+			restoreCfg := control.DefaultRestoreConfig(dttm.HumanReadableDriveItem)
+			restoreCfg.IncludePermissions = true
+
			ci := stub.ConfigInfo{
-				Opts: control.Options{RestorePermissions: true},
+				Opts: control.DefaultOptions(),
				// Alright to be empty, needed for OneDrive.
-				RestoreCfg: control.RestoreConfig{},
+				RestoreCfg: restoreCfg,
			}

			// Pull the data prior to waiting for the status as otherwise it will
@@ -1130,16 +1230,16 @@ func (suite *ControllerIntegrationSuite) TestRestoreAndBackup_largeMailAttachmen
		},
	}

+	restoreCfg := control.DefaultRestoreConfig(dttm.HumanReadableDriveItem)
+	restoreCfg.IncludePermissions = true
+
	runRestoreBackupTest(
		suite.T(),
		test,
		suite.ctrl.tenant,
		[]string{suite.user},
-		control.Options{
-			RestorePermissions: true,
-			ToggleFeatures:     control.Toggles{},
-		},
-	)
+		control.DefaultOptions(),
+		restoreCfg)
}
func (suite *ControllerIntegrationSuite) TestBackup_CreatesPrefixCollections() {
@@ -1222,7 +1322,7 @@ func (suite *ControllerIntegrationSuite) TestBackup_CreatesPrefixCollections() {
		start = time.Now()
	)

-	id, name, err := backupCtrl.PopulateOwnerIDAndNamesFrom(ctx, backupSel.DiscreteOwner, nil)
+	id, name, err := backupCtrl.PopulateProtectedResourceIDAndName(ctx, backupSel.DiscreteOwner, nil)
	require.NoError(t, err, clues.ToCore(err))

	backupSel.SetDiscreteOwnerIDName(id, name)
@@ -1233,10 +1333,7 @@ func (suite *ControllerIntegrationSuite) TestBackup_CreatesPrefixCollections() {
		backupSel,
		nil,
		version.NoBackup,
-		control.Options{
-			RestorePermissions: false,
-			ToggleFeatures:     control.Toggles{},
-		},
+		control.DefaultOptions(),
		fault.New(true))
	require.NoError(t, err)
	assert.True(t, canUsePreviousBackup, "can use previous backup")


@@ -466,7 +466,7 @@ func (suite *BackupIntgSuite) TestMailFetch() {
			ctx, flush := tester.NewContext(t)
			defer flush()

-			ctrlOpts := control.Defaults()
+			ctrlOpts := control.DefaultOptions()
			ctrlOpts.ToggleFeatures.DisableDelta = !test.canMakeDeltaQueries

			collections, err := createCollections(
@@ -554,7 +554,7 @@ func (suite *BackupIntgSuite) TestDelta() {
				inMock.NewProvider(userID, userID),
				test.scope,
				DeltaPaths{},
-				control.Defaults(),
+				control.DefaultOptions(),
				func(status *support.ControllerOperationStatus) {},
				fault.New(true))
			require.NoError(t, err, clues.ToCore(err))
@@ -587,7 +587,7 @@ func (suite *BackupIntgSuite) TestDelta() {
				inMock.NewProvider(userID, userID),
				test.scope,
				dps,
-				control.Defaults(),
+				control.DefaultOptions(),
				func(status *support.ControllerOperationStatus) {},
				fault.New(true))
			require.NoError(t, err, clues.ToCore(err))
@@ -633,7 +633,7 @@ func (suite *BackupIntgSuite) TestMailSerializationRegression() {
		inMock.NewProvider(suite.user, suite.user),
		sel.Scopes()[0],
		DeltaPaths{},
-		control.Defaults(),
+		control.DefaultOptions(),
		newStatusUpdater(t, &wg),
		fault.New(true))
	require.NoError(t, err, clues.ToCore(err))
@@ -709,7 +709,7 @@ func (suite *BackupIntgSuite) TestContactSerializationRegression() {
				inMock.NewProvider(suite.user, suite.user),
				test.scope,
				DeltaPaths{},
-				control.Defaults(),
+				control.DefaultOptions(),
				newStatusUpdater(t, &wg),
				fault.New(true))
			require.NoError(t, err, clues.ToCore(err))
@@ -834,7 +834,7 @@ func (suite *BackupIntgSuite) TestEventsSerializationRegression() {
				inMock.NewProvider(suite.user, suite.user),
				test.scope,
				DeltaPaths{},
-				control.Defaults(),
+				control.DefaultOptions(),
				newStatusUpdater(t, &wg),
				fault.New(true))
			require.NoError(t, err, clues.ToCore(err))
@@ -1995,7 +1995,7 @@ func (suite *CollectionPopulationSuite) TestFilterContainersAndFillCollections_i
			ctx, flush := tester.NewContext(t)
			defer flush()

-			ctrlOpts := control.Defaults()
+			ctrlOpts := control.DefaultOptions()
			ctrlOpts.ToggleFeatures.DisableDelta = !deltaAfter

			getter := test.getter


@@ -178,7 +178,7 @@ func (suite *CollectionSuite) TestNewCollection_state() {
				test.curr, test.prev, test.loc,
				0,
				&mockItemer{}, nil,
-				control.Defaults(),
+				control.DefaultOptions(),
				false)
			assert.Equal(t, test.expect, c.State(), "collection state")
			assert.Equal(t, test.curr, c.fullPath, "full path")


@@ -14,6 +14,7 @@ import (
	"github.com/alcionai/corso/src/internal/m365/graph"
	"github.com/alcionai/corso/src/internal/m365/support"
	"github.com/alcionai/corso/src/internal/observe"
+	"github.com/alcionai/corso/src/internal/operations/inject"
	"github.com/alcionai/corso/src/pkg/backup/details"
	"github.com/alcionai/corso/src/pkg/control"
	"github.com/alcionai/corso/src/pkg/count"
@@ -28,7 +29,7 @@ import (
func ConsumeRestoreCollections(
	ctx context.Context,
	ac api.Client,
-	restoreCfg control.RestoreConfig,
+	rcc inject.RestoreConsumerConfig,
	dcs []data.RestoreCollection,
	deets *details.Builder,
	errs *fault.Bus,
@@ -39,16 +40,13 @@ func ConsumeRestoreCollections(
	}

	var (
-		userID         = dcs[0].FullPath().ResourceOwner()
+		userID         = rcc.ProtectedResource.ID()
		directoryCache = make(map[path.CategoryType]graph.ContainerResolver)
		handlers       = restoreHandlers(ac)
		metrics        support.CollectionMetrics
		el             = errs.Local()
	)

-	// FIXME: should be user name
-	ctx = clues.Add(ctx, "resource_owner", clues.Hide(userID))
-
	for _, dc := range dcs {
		if el.Failure() != nil {
			break
@@ -80,7 +78,7 @@ func ConsumeRestoreCollections(
		containerID, gcc, err := createDestination(
			ictx,
			handler,
-			handler.formatRestoreDestination(restoreCfg.Location, dc.FullPath()),
+			handler.formatRestoreDestination(rcc.RestoreConfig.Location, dc.FullPath()),
			userID,
			directoryCache[category],
			errs)
@@ -105,7 +103,7 @@ func ConsumeRestoreCollections(
			userID,
			containerID,
			collisionKeyToItemID,
-			restoreCfg.OnCollision,
+			rcc.RestoreConfig.OnCollision,
			deets,
			errs,
			ctr)
@@ -126,7 +124,7 @@ func ConsumeRestoreCollections(
		support.Restore,
		len(dcs),
		metrics,
-		restoreCfg.Location)
+		rcc.RestoreConfig.Location)

	return status, el.Failure()
}


@@ -271,7 +271,9 @@ func Wrap(ctx context.Context, e error, msg string) *clues.Err {
		e = clues.Stack(e, clues.New(mainMsg))
	}

-	return setLabels(clues.Wrap(e, msg).WithClues(ctx).With(data...), innerMsg)
+	ce := clues.Wrap(e, msg).WithClues(ctx).With(data...).WithTrace(1)
+
+	return setLabels(ce, innerMsg)
}

// Stack is a helper function that extracts ODataError metadata from
@@ -292,7 +294,9 @@ func Stack(ctx context.Context, e error) *clues.Err {
		e = clues.Stack(e, clues.New(mainMsg))
	}

-	return setLabels(clues.Stack(e).WithClues(ctx).With(data...), innerMsg)
+	ce := clues.Stack(e).WithClues(ctx).With(data...).WithTrace(1)
+
+	return setLabels(ce, innerMsg)
}

// stackReq is a helper function that extracts ODataError metadata from


@@ -796,7 +796,7 @@ func compareDriveItem(
		assert.Equal(t, expectedMeta.FileName, itemMeta.FileName)
	}

-	if !mci.Opts.RestorePermissions {
+	if !mci.RestoreCfg.IncludePermissions {
		assert.Equal(t, 0, len(itemMeta.Permissions))
		return true
	}


@@ -26,6 +26,10 @@ type Controller struct {
	Err   error
	Stats data.CollectionStats

+	ProtectedResourceID   string
+	ProtectedResourceName string
+	ProtectedResourceErr  error
}

func (ctrl Controller) ProduceBackupCollections(
@@ -59,13 +63,22 @@ func (ctrl Controller) Wait() *data.CollectionStats {
func (ctrl Controller) ConsumeRestoreCollections(
	_ context.Context,
-	_ int,
-	_ selectors.Selector,
-	_ control.RestoreConfig,
-	_ control.Options,
+	_ inject.RestoreConsumerConfig,
	_ []data.RestoreCollection,
	_ *fault.Bus,
	_ *count.Bus,
) (*details.Details, error) {
	return ctrl.Deets, ctrl.Err
}

+func (ctrl Controller) CacheItemInfo(dii details.ItemInfo) {}
+
+func (ctrl Controller) PopulateProtectedResourceIDAndName(
+	ctx context.Context,
+	protectedResource string, // input value, can be either id or name
+	ins idname.Cacher,
+) (string, string, error) {
+	return ctrl.ProtectedResourceID,
+		ctrl.ProtectedResourceName,
+		ctrl.ProtectedResourceErr
+}


@@ -945,7 +945,7 @@ func (suite *CollectionUnitTestSuite) TestItemExtensions() {
 				nil,
 			}
 
-			opts := control.Defaults()
+			opts := control.DefaultOptions()
 			opts.ItemExtensionFactory = append(
 				opts.ItemExtensionFactory,
 				test.factories...)


@@ -35,6 +35,7 @@ type BackupHandler interface {
 	api.Getter
 	GetItemPermissioner
 	GetItemer
+	NewDrivePagerer
 
 	// PathPrefix constructs the service and category specific path prefix for
 	// the given values.
@@ -49,7 +50,6 @@ type BackupHandler interface {
 	// ServiceCat returns the service and category used by this implementation.
 	ServiceCat() (path.ServiceType, path.CategoryType)
-	NewDrivePager(resourceOwner string, fields []string) api.DrivePager
 	NewItemPager(driveID, link string, fields []string) api.DriveItemDeltaEnumerator
 
 	// FormatDisplayPath creates a human-readable string to represent the
 	// provided path.
@@ -61,6 +61,10 @@ type BackupHandler interface {
 	IncludesDir(dir string) bool
 }
 
+type NewDrivePagerer interface {
+	NewDrivePager(resourceOwner string, fields []string) api.DrivePager
+}
+
 type GetItemPermissioner interface {
 	GetItemPermission(
 		ctx context.Context,
@@ -86,7 +90,9 @@ type RestoreHandler interface {
 	GetItemsByCollisionKeyser
 	GetRootFolderer
 	ItemInfoAugmenter
+	NewDrivePagerer
 	NewItemContentUploader
+	PostDriver
 	PostItemInContainerer
 	DeleteItemPermissioner
 	UpdateItemPermissioner
@@ -145,6 +151,13 @@ type UpdateItemLinkSharer interface {
 	) (models.Permissionable, error)
 }
 
+type PostDriver interface {
+	PostDrive(
+		ctx context.Context,
+		protectedResourceID, driveName string,
+	) (models.Driveable, error)
+}
+
 type PostItemInContainerer interface {
 	PostItemInContainer(
 		ctx context.Context,


@@ -361,8 +361,8 @@ func (suite *OneDriveIntgSuite) TestCreateGetDeleteFolder() {
 		Folders: folderElements,
 	}
 
-	caches := NewRestoreCaches()
-	caches.DriveIDToRootFolderID[driveID] = ptr.Val(rootFolder.GetId())
+	caches := NewRestoreCaches(nil)
+	caches.DriveIDToDriveInfo[driveID] = driveInfo{rootFolderID: ptr.Val(rootFolder.GetId())}
 
 	rh := NewRestoreHandler(suite.ac)


@@ -5,6 +5,7 @@ import (
 	"net/http"
 	"strings"
 
+	"github.com/alcionai/clues"
 	"github.com/microsoftgraph/msgraph-sdk-go/drives"
 	"github.com/microsoftgraph/msgraph-sdk-go/models"
@@ -133,6 +134,19 @@ func NewRestoreHandler(ac api.Client) *itemRestoreHandler {
 	return &itemRestoreHandler{ac.Drives()}
 }
 
+func (h itemRestoreHandler) PostDrive(
+	context.Context,
+	string, string,
+) (models.Driveable, error) {
+	return nil, clues.New("creating drives in oneDrive is not supported")
+}
+
+func (h itemRestoreHandler) NewDrivePager(
+	resourceOwner string, fields []string,
+) api.DrivePager {
+	return h.ac.NewUserDrivePager(resourceOwner, fields)
+}
+
 // AugmentItemInfo will populate a details.OneDriveInfo struct
 // with properties from the drive item. ItemSize is specified
 // separately for restore processes because the local itemable


@@ -249,9 +249,25 @@ type RestoreHandler struct {
 	PostItemResp models.DriveItemable
 	PostItemErr  error
 
+	DrivePagerV api.DrivePager
+
+	PostDriveResp models.Driveable
+	PostDriveErr  error
+
 	UploadSessionErr error
 }
 
+func (h RestoreHandler) PostDrive(
+	ctx context.Context,
+	protectedResourceID, driveName string,
+) (models.Driveable, error) {
+	return h.PostDriveResp, h.PostDriveErr
+}
+
+func (h RestoreHandler) NewDrivePager(string, []string) api.DrivePager {
+	return h.DrivePagerV
+}
+
 func (h *RestoreHandler) AugmentItemInfo(
 	details.ItemInfo,
 	models.DriveItemable,


@@ -15,6 +15,7 @@ import (
 	"github.com/microsoftgraph/msgraph-sdk-go/models"
 	"github.com/pkg/errors"
 
+	"github.com/alcionai/corso/src/internal/common/idname"
 	"github.com/alcionai/corso/src/internal/common/ptr"
 	"github.com/alcionai/corso/src/internal/data"
 	"github.com/alcionai/corso/src/internal/diagnostics"
@@ -22,6 +23,7 @@ import (
 	"github.com/alcionai/corso/src/internal/m365/onedrive/metadata"
 	"github.com/alcionai/corso/src/internal/m365/support"
 	"github.com/alcionai/corso/src/internal/observe"
+	"github.com/alcionai/corso/src/internal/operations/inject"
 	"github.com/alcionai/corso/src/internal/version"
 	"github.com/alcionai/corso/src/pkg/backup/details"
 	"github.com/alcionai/corso/src/pkg/control"
@@ -37,54 +39,30 @@ const (
 	maxUploadRetries = 3
 )
 
-type restoreCaches struct {
-	collisionKeyToItemID  map[string]api.DriveItemIDType
-	DriveIDToRootFolderID map[string]string
-	Folders               *folderCache
-	OldLinkShareIDToNewID map[string]string
-	OldPermIDToNewID      map[string]string
-	ParentDirToMeta       map[string]metadata.Metadata
-
-	pool sync.Pool
-}
-
-func NewRestoreCaches() *restoreCaches {
-	return &restoreCaches{
-		collisionKeyToItemID:  map[string]api.DriveItemIDType{},
-		DriveIDToRootFolderID: map[string]string{},
-		Folders:               NewFolderCache(),
-		OldLinkShareIDToNewID: map[string]string{},
-		OldPermIDToNewID:      map[string]string{},
-		ParentDirToMeta:       map[string]metadata.Metadata{},
-		// Buffer pool for uploads
-		pool: sync.Pool{
-			New: func() any {
-				b := make([]byte, graph.CopyBufferSize)
-				return &b
-			},
-		},
-	}
-}
-
 // ConsumeRestoreCollections will restore the specified data collections into OneDrive
 func ConsumeRestoreCollections(
 	ctx context.Context,
 	rh RestoreHandler,
-	backupVersion int,
-	restoreCfg control.RestoreConfig,
-	opts control.Options,
+	rcc inject.RestoreConsumerConfig,
+	backupDriveIDNames idname.Cacher,
 	dcs []data.RestoreCollection,
 	deets *details.Builder,
 	errs *fault.Bus,
 	ctr *count.Bus,
 ) (*support.ControllerOperationStatus, error) {
 	var (
 		restoreMetrics support.CollectionMetrics
-		caches         = NewRestoreCaches()
-		el             = errs.Local()
+		el                = errs.Local()
+		caches            = NewRestoreCaches(backupDriveIDNames)
+		fallbackDriveName = "" // onedrive cannot create drives
 	)
 
-	ctx = clues.Add(ctx, "backup_version", backupVersion)
+	ctx = clues.Add(ctx, "backup_version", rcc.BackupVersion)
+
+	err := caches.Populate(ctx, rh, rcc.ProtectedResource.ID())
+	if err != nil {
+		return nil, clues.Wrap(err, "initializing restore caches")
+	}
 
 	// Reorder collections so that the parents directories are created
 	// before the child directories; a requirement for permissions.
@@ -102,19 +80,17 @@ func ConsumeRestoreCollections(
 			ictx = clues.Add(
 				ctx,
 				"category", dc.FullPath().Category(),
-				"resource_owner", clues.Hide(dc.FullPath().ResourceOwner()),
 				"full_path", dc.FullPath())
 		)
 
 		metrics, err = RestoreCollection(
 			ictx,
 			rh,
-			restoreCfg,
-			backupVersion,
+			rcc,
 			dc,
 			caches,
 			deets,
-			opts.RestorePermissions,
+			fallbackDriveName,
 			errs,
 			ctr.Local())
 		if err != nil {
@@ -133,7 +109,7 @@ func ConsumeRestoreCollections(
 		support.Restore,
 		len(dcs),
 		restoreMetrics,
-		restoreCfg.Location)
+		rcc.RestoreConfig.Location)
 
 	return status, el.Failure()
 }
@@ -146,12 +122,11 @@ func ConsumeRestoreCollections(
 func RestoreCollection(
 	ctx context.Context,
 	rh RestoreHandler,
-	restoreCfg control.RestoreConfig,
-	backupVersion int,
+	rcc inject.RestoreConsumerConfig,
 	dc data.RestoreCollection,
 	caches *restoreCaches,
 	deets *details.Builder,
-	restorePerms bool, // TODD: move into restoreConfig
+	fallbackDriveName string,
 	errs *fault.Bus,
 	ctr *count.Bus,
 ) (support.CollectionMetrics, error) {
@@ -174,23 +149,31 @@ func RestoreCollection(
 		return metrics, clues.Wrap(err, "creating drive path").WithClues(ctx)
 	}
 
-	if _, ok := caches.DriveIDToRootFolderID[drivePath.DriveID]; !ok {
-		root, err := rh.GetRootFolder(ctx, drivePath.DriveID)
-		if err != nil {
-			return metrics, clues.Wrap(err, "getting drive root id")
-		}
-
-		caches.DriveIDToRootFolderID[drivePath.DriveID] = ptr.Val(root.GetId())
-	}
+	di, err := ensureDriveExists(
+		ctx,
+		rh,
+		caches,
+		drivePath,
+		rcc.ProtectedResource.ID(),
+		fallbackDriveName)
+	if err != nil {
+		return metrics, clues.Wrap(err, "ensuring drive exists")
+	}
+
+	// clobber the drivePath details with the details retrieved
+	// in the ensure func, as they might have changed to reflect
+	// a different drive as a restore location.
+	drivePath.DriveID = di.id
+	drivePath.Root = di.rootFolderID
 
 	// Assemble folder hierarchy we're going to restore into (we recreate the folder hierarchy
 	// from the backup under this the restore folder instead of root)
 	// i.e. Restore into `<restoreContainerName>/<original folder path>`
 	// the drive into which this folder gets restored is tracked separately in drivePath.
 	restoreDir := &path.Builder{}
 
-	if len(restoreCfg.Location) > 0 {
-		restoreDir = restoreDir.Append(restoreCfg.Location)
+	if len(rcc.RestoreConfig.Location) > 0 {
+		restoreDir = restoreDir.Append(rcc.RestoreConfig.Location)
 	}
 
 	restoreDir = restoreDir.Append(drivePath.Folders...)
@@ -209,8 +192,8 @@ func RestoreCollection(
 		drivePath,
 		dc,
 		caches,
-		backupVersion,
-		restorePerms)
+		rcc.BackupVersion,
+		rcc.RestoreConfig.IncludePermissions)
 	if err != nil {
 		return metrics, clues.Wrap(err, "getting permissions").WithClues(ctx)
 	}
@@ -224,7 +207,7 @@ func RestoreCollection(
 		dc.FullPath(),
 		colMeta,
 		caches,
-		restorePerms)
+		rcc.RestoreConfig.IncludePermissions)
 	if err != nil {
 		return metrics, clues.Wrap(err, "creating folders for restore")
 	}
@@ -298,14 +281,12 @@ func RestoreCollection(
 			itemInfo, skipped, err := restoreItem(
 				ictx,
 				rh,
-				restoreCfg,
+				rcc,
 				dc,
-				backupVersion,
 				drivePath,
 				restoreFolderID,
 				copyBuffer,
 				caches,
-				restorePerms,
 				itemData,
 				itemPath,
 				ctr)
@@ -348,14 +329,12 @@ func RestoreCollection(
 func restoreItem(
 	ctx context.Context,
 	rh RestoreHandler,
-	restoreCfg control.RestoreConfig,
+	rcc inject.RestoreConsumerConfig,
 	fibn data.FetchItemByNamer,
-	backupVersion int,
 	drivePath *path.DrivePath,
 	restoreFolderID string,
 	copyBuffer []byte,
 	caches *restoreCaches,
-	restorePerms bool,
 	itemData data.Stream,
 	itemPath path.Path,
 	ctr *count.Bus,
@@ -363,11 +342,11 @@ func restoreItem(
 	itemUUID := itemData.UUID()
 	ctx = clues.Add(ctx, "item_id", itemUUID)
 
-	if backupVersion < version.OneDrive1DataAndMetaFiles {
+	if rcc.BackupVersion < version.OneDrive1DataAndMetaFiles {
 		itemInfo, err := restoreV0File(
 			ctx,
 			rh,
-			restoreCfg,
+			rcc.RestoreConfig,
 			drivePath,
 			fibn,
 			restoreFolderID,
@@ -376,7 +355,7 @@ func restoreItem(
 			itemData,
 			ctr)
 		if err != nil {
-			if errors.Is(err, graph.ErrItemAlreadyExistsConflict) && restoreCfg.OnCollision == control.Skip {
+			if errors.Is(err, graph.ErrItemAlreadyExistsConflict) && rcc.RestoreConfig.OnCollision == control.Skip {
 				return details.ItemInfo{}, true, nil
 			}
@@ -399,7 +378,7 @@ func restoreItem(
 	// Only the version.OneDrive1DataAndMetaFiles needed to deserialize the
 	// permission for child folders here. Later versions can request
 	// permissions inline when processing the collection.
-	if !restorePerms || backupVersion >= version.OneDrive4DirIncludesPermissions {
+	if !rcc.RestoreConfig.IncludePermissions || rcc.BackupVersion >= version.OneDrive4DirIncludesPermissions {
 		return details.ItemInfo{}, true, nil
 	}
@@ -419,22 +398,21 @@ func restoreItem(
 	// only items with DataFileSuffix from this point on
-	if backupVersion < version.OneDrive6NameInMeta {
+	if rcc.BackupVersion < version.OneDrive6NameInMeta {
 		itemInfo, err := restoreV1File(
 			ctx,
 			rh,
-			restoreCfg,
+			rcc,
 			drivePath,
 			fibn,
 			restoreFolderID,
 			copyBuffer,
-			restorePerms,
 			caches,
 			itemPath,
 			itemData,
 			ctr)
 		if err != nil {
-			if errors.Is(err, graph.ErrItemAlreadyExistsConflict) && restoreCfg.OnCollision == control.Skip {
+			if errors.Is(err, graph.ErrItemAlreadyExistsConflict) && rcc.RestoreConfig.OnCollision == control.Skip {
 				return details.ItemInfo{}, true, nil
 			}
@@ -449,18 +427,17 @@ func restoreItem(
 	itemInfo, err := restoreV6File(
 		ctx,
 		rh,
-		restoreCfg,
+		rcc,
 		drivePath,
 		fibn,
 		restoreFolderID,
 		copyBuffer,
-		restorePerms,
 		caches,
 		itemPath,
 		itemData,
 		ctr)
 	if err != nil {
-		if errors.Is(err, graph.ErrItemAlreadyExistsConflict) && restoreCfg.OnCollision == control.Skip {
+		if errors.Is(err, graph.ErrItemAlreadyExistsConflict) && rcc.RestoreConfig.OnCollision == control.Skip {
 			return details.ItemInfo{}, true, nil
 		}
@@ -504,12 +481,11 @@ func restoreV0File(
 func restoreV1File(
 	ctx context.Context,
 	rh RestoreHandler,
-	restoreCfg control.RestoreConfig,
+	rcc inject.RestoreConsumerConfig,
 	drivePath *path.DrivePath,
 	fibn data.FetchItemByNamer,
 	restoreFolderID string,
 	copyBuffer []byte,
-	restorePerms bool,
 	caches *restoreCaches,
 	itemPath path.Path,
 	itemData data.Stream,
@@ -519,7 +495,7 @@ func restoreV1File(
 	itemID, itemInfo, err := restoreFile(
 		ctx,
-		restoreCfg,
+		rcc.RestoreConfig,
 		rh,
 		fibn,
 		trimmedName,
@@ -535,7 +511,7 @@ func restoreV1File(
 	// Mark it as success without processing .meta
 	// file if we are not restoring permissions
-	if !restorePerms {
+	if !rcc.RestoreConfig.IncludePermissions {
 		return itemInfo, nil
 	}
@@ -565,12 +541,11 @@ func restoreV1File(
 func restoreV6File(
 	ctx context.Context,
 	rh RestoreHandler,
-	restoreCfg control.RestoreConfig,
+	rcc inject.RestoreConsumerConfig,
 	drivePath *path.DrivePath,
 	fibn data.FetchItemByNamer,
 	restoreFolderID string,
 	copyBuffer []byte,
-	restorePerms bool,
 	caches *restoreCaches,
 	itemPath path.Path,
 	itemData data.Stream,
@@ -604,7 +579,7 @@ func restoreV6File(
 	itemID, itemInfo, err := restoreFile(
 		ctx,
-		restoreCfg,
+		rcc.RestoreConfig,
 		rh,
 		fibn,
 		meta.FileName,
@@ -620,7 +595,7 @@ func restoreV6File(
 	// Mark it as success without processing .meta
 	// file if we are not restoring permissions
-	if !restorePerms {
+	if !rcc.RestoreConfig.IncludePermissions {
 		return itemInfo, nil
 	}
@@ -704,7 +679,7 @@ func createRestoreFolders(
 		driveID        = drivePath.DriveID
 		folders        = restoreDir.Elements()
 		location       = path.Builder{}.Append(driveID)
-		parentFolderID = caches.DriveIDToRootFolderID[drivePath.DriveID]
+		parentFolderID = caches.DriveIDToDriveInfo[drivePath.DriveID].rootFolderID
 	)
 
 	ctx = clues.Add(
@@ -1113,3 +1088,79 @@ func AugmentRestorePaths(
 	return paths, nil
 }
+
+type PostDriveAndGetRootFolderer interface {
+	PostDriver
+	GetRootFolderer
+}
+
+// ensureDriveExists looks up the drive by its id. If no drive is found with
+// that ID, a new drive is generated with the same name. If the name collides
+// with an existing drive, a number is appended to the drive name. Eg: foo ->
+// foo 1. This will repeat as many times as is needed.
+// Returns the root folder of the drive.
+func ensureDriveExists(
+	ctx context.Context,
+	pdagrf PostDriveAndGetRootFolderer,
+	caches *restoreCaches,
+	drivePath *path.DrivePath,
+	protectedResourceID, fallbackDriveName string,
+) (driveInfo, error) {
+	driveID := drivePath.DriveID
+
+	// the drive might already be cached by ID. it's okay
+	// if the name has changed. the ID is a better reference
+	// anyway.
+	if di, ok := caches.DriveIDToDriveInfo[driveID]; ok {
+		return di, nil
+	}
+
+	var (
+		newDriveName = fallbackDriveName
+		newDrive     models.Driveable
+		err          error
+	)
+
+	// if the drive wasn't found by ID, maybe we can find a
+	// drive with the same name but different ID.
+	// start by looking up the old drive's name
+	oldName, ok := caches.BackupDriveIDName.NameOf(driveID)
+	if ok {
+		// check for drives that currently have the same name
+		if di, ok := caches.DriveNameToDriveInfo[oldName]; ok {
+			return di, nil
+		}
+
+		// if no current drives have the same name, we'll make
+		// a new drive with that name.
+		newDriveName = oldName
+	}
+
+	nextDriveName := newDriveName
+
+	// For sharepoint, document libraries can collide by name with
+	// item types beyond just drive. Lists, for example, cannot share
+	// names with document libraries (they're the same type, actually).
+	// In those cases we need to rename the drive until we can create
+	// one without a collision.
+	for i := 1; ; i++ {
+		ictx := clues.Add(ctx, "new_drive_name", clues.Hide(nextDriveName))
+
+		newDrive, err = pdagrf.PostDrive(ictx, protectedResourceID, nextDriveName)
+		if err != nil && !errors.Is(err, graph.ErrItemAlreadyExistsConflict) {
+			return driveInfo{}, clues.Wrap(err, "creating new drive")
+		}
+
+		if err == nil {
+			break
+		}
+
+		nextDriveName = fmt.Sprintf("%s %d", newDriveName, i)
+	}
+
+	if err := caches.AddDrive(ctx, newDrive, pdagrf); err != nil {
+		return driveInfo{}, clues.Wrap(err, "adding drive to cache").OrNil()
+	}
+
+	return caches.DriveIDToDriveInfo[ptr.Val(newDrive.GetId())], nil
+}


@@ -0,0 +1,116 @@
package onedrive

import (
	"context"
	"sync"

	"github.com/alcionai/clues"
	"github.com/microsoftgraph/msgraph-sdk-go/models"

	"github.com/alcionai/corso/src/internal/common/idname"
	"github.com/alcionai/corso/src/internal/common/ptr"
	"github.com/alcionai/corso/src/internal/m365/graph"
	"github.com/alcionai/corso/src/internal/m365/onedrive/metadata"
	"github.com/alcionai/corso/src/pkg/services/m365/api"
)

type driveInfo struct {
	id           string
	name         string
	rootFolderID string
}

type restoreCaches struct {
	BackupDriveIDName     idname.Cacher
	collisionKeyToItemID  map[string]api.DriveItemIDType
	DriveIDToDriveInfo    map[string]driveInfo
	DriveNameToDriveInfo  map[string]driveInfo
	Folders               *folderCache
	OldLinkShareIDToNewID map[string]string
	OldPermIDToNewID      map[string]string
	ParentDirToMeta       map[string]metadata.Metadata

	pool sync.Pool
}

func (rc *restoreCaches) AddDrive(
	ctx context.Context,
	md models.Driveable,
	grf GetRootFolderer,
) error {
	di := driveInfo{
		id:   ptr.Val(md.GetId()),
		name: ptr.Val(md.GetName()),
	}

	ctx = clues.Add(ctx, "drive_info", di)

	root, err := grf.GetRootFolder(ctx, di.id)
	if err != nil {
		return clues.Wrap(err, "getting drive root id")
	}

	di.rootFolderID = ptr.Val(root.GetId())

	rc.DriveIDToDriveInfo[di.id] = di
	rc.DriveNameToDriveInfo[di.name] = di

	return nil
}
// Populate looks up drive items available to the protectedResource
// and adds their info to the caches.
func (rc *restoreCaches) Populate(
	ctx context.Context,
	gdparf GetDrivePagerAndRootFolderer,
	protectedResourceID string,
) error {
	drives, err := api.GetAllDrives(
		ctx,
		gdparf.NewDrivePager(protectedResourceID, nil),
		true,
		maxDrivesRetries)
	if err != nil {
		return clues.Wrap(err, "getting drives")
	}

	for _, md := range drives {
		if err := rc.AddDrive(ctx, md, gdparf); err != nil {
			return clues.Wrap(err, "caching drive")
		}
	}

	return nil
}

type GetDrivePagerAndRootFolderer interface {
	GetRootFolderer
	NewDrivePagerer
}

func NewRestoreCaches(
	backupDriveIDNames idname.Cacher,
) *restoreCaches {
	// avoid nil panics
	if backupDriveIDNames == nil {
		backupDriveIDNames = idname.NewCache(nil)
	}

	return &restoreCaches{
		BackupDriveIDName:     backupDriveIDNames,
		collisionKeyToItemID:  map[string]api.DriveItemIDType{},
		DriveIDToDriveInfo:    map[string]driveInfo{},
		DriveNameToDriveInfo:  map[string]driveInfo{},
		Folders:               NewFolderCache(),
		OldLinkShareIDToNewID: map[string]string{},
		OldPermIDToNewID:      map[string]string{},
		ParentDirToMeta:       map[string]metadata.Metadata{},
		// Buffer pool for uploads
		pool: sync.Pool{
			New: func() any {
				b := make([]byte, graph.CopyBufferSize)
				return &b
			},
		},
	}
}


@@ -11,16 +11,19 @@ import (
 	"github.com/stretchr/testify/require"
 	"github.com/stretchr/testify/suite"
 
+	"github.com/alcionai/corso/src/internal/common/idname"
 	"github.com/alcionai/corso/src/internal/common/ptr"
 	"github.com/alcionai/corso/src/internal/m365/graph"
 	odConsts "github.com/alcionai/corso/src/internal/m365/onedrive/consts"
 	"github.com/alcionai/corso/src/internal/m365/onedrive/mock"
+	"github.com/alcionai/corso/src/internal/operations/inject"
 	"github.com/alcionai/corso/src/internal/tester"
 	"github.com/alcionai/corso/src/internal/version"
 	"github.com/alcionai/corso/src/pkg/control"
 	"github.com/alcionai/corso/src/pkg/count"
 	"github.com/alcionai/corso/src/pkg/path"
 	"github.com/alcionai/corso/src/pkg/services/m365/api"
+	apiMock "github.com/alcionai/corso/src/pkg/services/m365/api/mock"
 )
 type RestoreUnitSuite struct {
@@ -491,7 +494,7 @@ func (suite *RestoreUnitSuite) TestRestoreItem_collisionHandling() {
 			mndi.SetId(ptr.To(mndiID))
 
 			var (
-				caches = NewRestoreCaches()
+				caches = NewRestoreCaches(nil)
 				rh     = &mock.RestoreHandler{
 					PostItemResp:  models.NewDriveItem(),
 					DeleteItemErr: test.deleteErr,
@@ -510,21 +513,25 @@ func (suite *RestoreUnitSuite) TestRestoreItem_collisionHandling() {
 			ctr := count.New()
 
+			rcc := inject.RestoreConsumerConfig{
+				BackupVersion: version.Backup,
+				Options:       control.DefaultOptions(),
+				RestoreConfig: restoreCfg,
+			}
+
 			_, skip, err := restoreItem(
 				ctx,
 				rh,
-				restoreCfg,
+				rcc,
 				mock.FetchItemByName{
 					Item: &mock.Data{
 						Reader: mock.FileRespReadCloser(mock.DriveFileMetaData),
 					},
 				},
-				version.Backup,
 				dp,
 				"",
 				make([]byte, graph.CopyBufferSize),
 				caches,
-				false,
 				&mock.Data{
 					ID:     uuid.NewString(),
 					Reader: mock.FileRespReadCloser(mock.DriveFilePayloadData),
@@ -617,3 +624,435 @@ func (suite *RestoreUnitSuite) TestCreateFolder() {
 		})
 	}
 }
type mockGRF struct {
err error
rootFolder models.DriveItemable
}
func (m *mockGRF) GetRootFolder(
context.Context,
string,
) (models.DriveItemable, error) {
return m.rootFolder, m.err
}
func (suite *RestoreUnitSuite) TestRestoreCaches_AddDrive() {
rfID := "this-is-id"
driveID := "another-id"
name := "name"
rf := models.NewDriveItem()
rf.SetId(&rfID)
md := models.NewDrive()
md.SetId(&driveID)
md.SetName(&name)
table := []struct {
name string
mock *mockGRF
expectErr require.ErrorAssertionFunc
expectID string
checkValues bool
}{
{
name: "good",
mock: &mockGRF{rootFolder: rf},
expectErr: require.NoError,
expectID: rfID,
checkValues: true,
},
{
name: "err",
mock: &mockGRF{err: assert.AnError},
expectErr: require.Error,
expectID: "",
},
}
for _, test := range table {
suite.Run(test.name, func() {
t := suite.T()
ctx, flush := tester.NewContext(t)
defer flush()
rc := NewRestoreCaches(nil)
err := rc.AddDrive(ctx, md, test.mock)
test.expectErr(t, err, clues.ToCore(err))
if test.checkValues {
idResult := rc.DriveIDToDriveInfo[driveID]
assert.Equal(t, driveID, idResult.id, "drive id")
assert.Equal(t, name, idResult.name, "drive name")
assert.Equal(t, test.expectID, idResult.rootFolderID, "root folder id")
nameResult := rc.DriveNameToDriveInfo[name]
assert.Equal(t, driveID, nameResult.id, "drive id")
assert.Equal(t, name, nameResult.name, "drive name")
assert.Equal(t, test.expectID, nameResult.rootFolderID, "root folder id")
}
})
}
}
type mockGDPARF struct {
err error
rootFolder models.DriveItemable
pager *apiMock.DrivePager
}
func (m *mockGDPARF) GetRootFolder(
context.Context,
string,
) (models.DriveItemable, error) {
return m.rootFolder, m.err
}
func (m *mockGDPARF) NewDrivePager(
string,
[]string,
) api.DrivePager {
return m.pager
}
func (suite *RestoreUnitSuite) TestRestoreCaches_Populate() {
rfID := "this-is-id"
driveID := "another-id"
name := "name"
rf := models.NewDriveItem()
rf.SetId(&rfID)
md := models.NewDrive()
md.SetId(&driveID)
md.SetName(&name)
table := []struct {
name string
mock *apiMock.DrivePager
expectErr require.ErrorAssertionFunc
expectLen int
checkValues bool
}{
{
name: "no results",
mock: &apiMock.DrivePager{
ToReturn: []apiMock.PagerResult{
{Drives: []models.Driveable{}},
},
},
expectErr: require.NoError,
expectLen: 0,
},
{
name: "one result",
mock: &apiMock.DrivePager{
ToReturn: []apiMock.PagerResult{
{Drives: []models.Driveable{md}},
},
},
expectErr: require.NoError,
expectLen: 1,
checkValues: true,
},
{
name: "error",
mock: &apiMock.DrivePager{
ToReturn: []apiMock.PagerResult{
{Err: assert.AnError},
},
},
expectErr: require.Error,
expectLen: 0,
},
}
for _, test := range table {
suite.Run(test.name, func() {
t := suite.T()
ctx, flush := tester.NewContext(t)
defer flush()
gdparf := &mockGDPARF{
rootFolder: rf,
pager: test.mock,
}
rc := NewRestoreCaches(nil)
err := rc.Populate(ctx, gdparf, "shmoo")
test.expectErr(t, err, clues.ToCore(err))
assert.Len(t, rc.DriveIDToDriveInfo, test.expectLen)
assert.Len(t, rc.DriveNameToDriveInfo, test.expectLen)
if test.checkValues {
idResult := rc.DriveIDToDriveInfo[driveID]
assert.Equal(t, driveID, idResult.id, "drive id")
assert.Equal(t, name, idResult.name, "drive name")
assert.Equal(t, rfID, idResult.rootFolderID, "root folder id")
nameResult := rc.DriveNameToDriveInfo[name]
assert.Equal(t, driveID, nameResult.id, "drive id")
assert.Equal(t, name, nameResult.name, "drive name")
assert.Equal(t, rfID, nameResult.rootFolderID, "root folder id")
}
})
}
}
type mockPDAGRF struct {
i int
postResp []models.Driveable
postErr []error
grf mockGRF
}
func (m *mockPDAGRF) PostDrive(
ctx context.Context,
protectedResourceID, driveName string,
) (models.Driveable, error) {
defer func() { m.i++ }()
md := m.postResp[m.i]
if md != nil {
md.SetName(&driveName)
}
return md, m.postErr[m.i]
}
func (m *mockPDAGRF) GetRootFolder(
ctx context.Context,
driveID string,
) (models.DriveItemable, error) {
return m.grf.rootFolder, m.grf.err
}
func (suite *RestoreUnitSuite) TestEnsureDriveExists() {
rfID := "this-is-id"
driveID := "another-id"
oldID := "old-id"
name := "name"
otherName := "other name"
rf := models.NewDriveItem()
rf.SetId(&rfID)
grf := mockGRF{rootFolder: rf}
makeMD := func() models.Driveable {
md := models.NewDrive()
md.SetId(&driveID)
md.SetName(&name)
return md
}
dp := &path.DrivePath{
DriveID: driveID,
Root: "root:",
Folders: path.Elements{},
}
oldDP := &path.DrivePath{
DriveID: oldID,
Root: "root:",
Folders: path.Elements{},
}
populatedCache := func(id string) *restoreCaches {
rc := NewRestoreCaches(nil)
di := driveInfo{
id: id,
name: name,
}
rc.DriveIDToDriveInfo[id] = di
rc.DriveNameToDriveInfo[name] = di
return rc
}
oldDriveIDNames := idname.NewCache(nil)
oldDriveIDNames.Add(oldID, name)
idSwitchedCache := func() *restoreCaches {
rc := NewRestoreCaches(oldDriveIDNames)
di := driveInfo{
id: "diff",
name: name,
}
rc.DriveIDToDriveInfo["diff"] = di
rc.DriveNameToDriveInfo[name] = di
return rc
}
table := []struct {
name string
dp *path.DrivePath
mock *mockPDAGRF
rc *restoreCaches
expectErr require.ErrorAssertionFunc
fallbackName string
expectName string
expectID string
skipValueChecks bool
}{
{
name: "drive already in cache",
dp: dp,
mock: &mockPDAGRF{
postResp: []models.Driveable{makeMD()},
postErr: []error{nil},
grf: grf,
},
rc: populatedCache(driveID),
expectErr: require.NoError,
fallbackName: name,
expectName: name,
expectID: driveID,
},
{
name: "drive with same name but different id exists",
dp: oldDP,
mock: &mockPDAGRF{
postResp: []models.Driveable{makeMD()},
postErr: []error{nil},
grf: grf,
},
rc: idSwitchedCache(),
expectErr: require.NoError,
fallbackName: otherName,
expectName: name,
expectID: "diff",
},
{
name: "drive created with old name",
dp: oldDP,
mock: &mockPDAGRF{
postResp: []models.Driveable{makeMD()},
postErr: []error{nil},
grf: grf,
},
rc: NewRestoreCaches(oldDriveIDNames),
expectErr: require.NoError,
fallbackName: otherName,
expectName: name,
expectID: driveID,
},
{
name: "drive created with fallback name",
dp: dp,
mock: &mockPDAGRF{
postResp: []models.Driveable{makeMD()},
postErr: []error{nil},
grf: grf,
},
rc: NewRestoreCaches(nil),
expectErr: require.NoError,
fallbackName: otherName,
expectName: otherName,
expectID: driveID,
},
{
name: "error creating drive",
dp: dp,
mock: &mockPDAGRF{
postResp: []models.Driveable{nil},
postErr: []error{assert.AnError},
grf: grf,
},
rc: NewRestoreCaches(nil),
expectErr: require.Error,
fallbackName: name,
expectName: "",
skipValueChecks: true,
expectID: driveID,
},
{
name: "drive name already exists",
dp: dp,
mock: &mockPDAGRF{
postResp: []models.Driveable{makeMD()},
postErr: []error{nil},
grf: grf,
},
rc: populatedCache("beaux"),
expectErr: require.NoError,
fallbackName: name,
expectName: name,
expectID: driveID,
},
{
name: "list with name already exists",
dp: dp,
mock: &mockPDAGRF{
postResp: []models.Driveable{nil, makeMD()},
postErr: []error{graph.ErrItemAlreadyExistsConflict, nil},
grf: grf,
},
rc: NewRestoreCaches(nil),
expectErr: require.NoError,
fallbackName: name,
expectName: name + " 1",
expectID: driveID,
},
{
name: "list with old name already exists",
dp: oldDP,
mock: &mockPDAGRF{
postResp: []models.Driveable{nil, makeMD()},
postErr: []error{graph.ErrItemAlreadyExistsConflict, nil},
grf: grf,
},
rc: NewRestoreCaches(oldDriveIDNames),
expectErr: require.NoError,
fallbackName: name,
expectName: name + " 1",
expectID: driveID,
},
{
name: "drive and list with name already exist",
dp: dp,
mock: &mockPDAGRF{
postResp: []models.Driveable{nil, makeMD()},
postErr: []error{graph.ErrItemAlreadyExistsConflict, nil},
grf: grf,
},
rc: populatedCache(driveID),
expectErr: require.NoError,
fallbackName: name,
expectName: name,
expectID: driveID,
},
}
for _, test := range table {
suite.Run(test.name, func() {
t := suite.T()
ctx, flush := tester.NewContext(t)
defer flush()
rc := test.rc
di, err := ensureDriveExists(
ctx,
test.mock,
rc,
test.dp,
"prID",
test.fallbackName)
test.expectErr(t, err, clues.ToCore(err))
if !test.skipValueChecks {
assert.Equal(t, test.expectName, di.name, "ensured drive has expected name")
assert.Equal(t, test.expectID, di.id, "ensured drive has expected id")
nameResult := rc.DriveNameToDriveInfo[test.expectName]
assert.Equal(t, test.expectName, nameResult.name, "found drive entry with expected name")
}
})
}
}

View File

@@ -12,6 +12,7 @@ import (
 	"github.com/stretchr/testify/require"
 	"github.com/stretchr/testify/suite"
+	"github.com/alcionai/corso/src/internal/common/dttm"
 	"github.com/alcionai/corso/src/internal/common/ptr"
 	"github.com/alcionai/corso/src/internal/m365/graph"
 	odConsts "github.com/alcionai/corso/src/internal/m365/onedrive/consts"
@@ -516,15 +517,16 @@ func testRestoreAndBackupMultipleFilesAndFoldersNoPermissions(
 			collectionsLatest: expected,
 		}
 
+		restoreCfg := control.DefaultRestoreConfig(dttm.HumanReadableDriveItem)
+		restoreCfg.IncludePermissions = true
+
 		runRestoreBackupTestVersions(
 			t,
 			testData,
 			suite.Tenant(),
 			[]string{suite.ResourceOwner()},
-			control.Options{
-				RestorePermissions: true,
-				ToggleFeatures:     control.Toggles{},
-			})
+			control.DefaultOptions(),
+			restoreCfg)
 		})
 	}
 }
@@ -763,15 +765,16 @@ func testPermissionsRestoreAndBackup(suite oneDriveSuite, startVersion int) {
 			collectionsLatest: expected,
 		}
 
+		restoreCfg := control.DefaultRestoreConfig(dttm.HumanReadableDriveItem)
+		restoreCfg.IncludePermissions = true
+
 		runRestoreBackupTestVersions(
 			t,
 			testData,
 			suite.Tenant(),
 			[]string{suite.ResourceOwner()},
-			control.Options{
-				RestorePermissions: true,
-				ToggleFeatures:     control.Toggles{},
-			})
+			control.DefaultOptions(),
+			restoreCfg)
 		})
 	}
 }
@@ -851,15 +854,16 @@ func testPermissionsBackupAndNoRestore(suite oneDriveSuite, startVersion int) {
 			collectionsLatest: expected,
 		}
 
+		restoreCfg := control.DefaultRestoreConfig(dttm.HumanReadableDriveItem)
+		restoreCfg.IncludePermissions = true
+
 		runRestoreBackupTestVersions(
 			t,
 			testData,
 			suite.Tenant(),
 			[]string{suite.ResourceOwner()},
-			control.Options{
-				RestorePermissions: false,
-				ToggleFeatures:     control.Toggles{},
-			})
+			control.DefaultOptions(),
+			restoreCfg)
 		})
 	}
 }
@@ -1054,15 +1058,16 @@ func testPermissionsInheritanceRestoreAndBackup(suite oneDriveSuite, startVersio
 			collectionsLatest: expected,
 		}
 
+		restoreCfg := control.DefaultRestoreConfig(dttm.HumanReadableDriveItem)
+		restoreCfg.IncludePermissions = true
+
 		runRestoreBackupTestVersions(
 			t,
 			testData,
 			suite.Tenant(),
 			[]string{suite.ResourceOwner()},
-			control.Options{
-				RestorePermissions: true,
-				ToggleFeatures:     control.Toggles{},
-			})
+			control.DefaultOptions(),
+			restoreCfg)
 		})
 	}
 }
@@ -1247,15 +1252,16 @@ func testLinkSharesInheritanceRestoreAndBackup(suite oneDriveSuite, startVersion
 			collectionsLatest: expected,
 		}
 
+		restoreCfg := control.DefaultRestoreConfig(dttm.HumanReadableDriveItem)
+		restoreCfg.IncludePermissions = true
+
 		runRestoreBackupTestVersions(
 			t,
 			testData,
 			suite.Tenant(),
 			[]string{suite.ResourceOwner()},
-			control.Options{
-				RestorePermissions: true,
-				ToggleFeatures:     control.Toggles{},
-			})
+			control.DefaultOptions(),
+			restoreCfg)
 		})
 	}
 }
@@ -1362,16 +1368,16 @@ func testRestoreFolderNamedFolderRegression(
 			collectionsLatest: expected,
 		}
 
+		restoreCfg := control.DefaultRestoreConfig(dttm.HumanReadableDriveItem)
+		restoreCfg.IncludePermissions = true
+
 		runRestoreTestWithVersion(
 			t,
 			testData,
 			suite.Tenant(),
 			[]string{suite.ResourceOwner()},
-			control.Options{
-				RestorePermissions: true,
-				ToggleFeatures:     control.Toggles{},
-			},
-		)
+			control.DefaultOptions(),
+			restoreCfg)
 		})
 	}
 }

View File

@@ -12,11 +12,11 @@ import (
 	"github.com/alcionai/corso/src/internal/m365/onedrive"
 	"github.com/alcionai/corso/src/internal/m365/sharepoint"
 	"github.com/alcionai/corso/src/internal/m365/support"
+	"github.com/alcionai/corso/src/internal/operations/inject"
 	"github.com/alcionai/corso/src/pkg/backup/details"
-	"github.com/alcionai/corso/src/pkg/control"
 	"github.com/alcionai/corso/src/pkg/count"
 	"github.com/alcionai/corso/src/pkg/fault"
-	"github.com/alcionai/corso/src/pkg/selectors"
+	"github.com/alcionai/corso/src/pkg/path"
 )
@@ -24,10 +24,7 @@
 // ConsumeRestoreCollections restores data from the specified collections
 // SideEffect: status is updated at the completion of operation
 func (ctrl *Controller) ConsumeRestoreCollections(
 	ctx context.Context,
-	backupVersion int,
-	sels selectors.Selector,
-	restoreCfg control.RestoreConfig,
-	opts control.Options,
+	rcc inject.RestoreConsumerConfig,
 	dcs []data.RestoreCollection,
 	errs *fault.Bus,
 	ctr *count.Bus,
@@ -35,42 +32,57 @@ func (ctrl *Controller) ConsumeRestoreCollections(
 	ctx, end := diagnostics.Span(ctx, "m365:restore")
 	defer end()
 
-	ctx = graph.BindRateLimiterConfig(ctx, graph.LimiterCfg{Service: sels.PathService()})
-	ctx = clues.Add(ctx, "restore_config", restoreCfg) // TODO(rkeepers): needs PII control
+	ctx = graph.BindRateLimiterConfig(ctx, graph.LimiterCfg{Service: rcc.Selector.PathService()})
+	ctx = clues.Add(ctx, "restore_config", rcc.RestoreConfig) // TODO(rkeepers): needs PII control
+
+	if len(dcs) == 0 {
+		return nil, clues.New("no collections to restore")
+	}
+
+	serviceEnabled, _, err := checkServiceEnabled(
+		ctx,
+		ctrl.AC.Users(),
+		rcc.Selector.PathService(),
+		rcc.ProtectedResource.ID())
+	if err != nil {
+		return nil, err
+	}
+
+	if !serviceEnabled {
+		return nil, clues.Stack(graph.ErrServiceNotEnabled).WithClues(ctx)
+	}
 
 	var (
-		status *support.ControllerOperationStatus
-		deets  = &details.Builder{}
-		err    error
+		service = rcc.Selector.PathService()
+		status  *support.ControllerOperationStatus
+		deets   = &details.Builder{}
 	)
 
-	switch sels.Service {
-	case selectors.ServiceExchange:
-		status, err = exchange.ConsumeRestoreCollections(ctx, ctrl.AC, restoreCfg, dcs, deets, errs, ctr)
-	case selectors.ServiceOneDrive:
+	switch service {
+	case path.ExchangeService:
+		status, err = exchange.ConsumeRestoreCollections(ctx, ctrl.AC, rcc, dcs, deets, errs, ctr)
+	case path.OneDriveService:
 		status, err = onedrive.ConsumeRestoreCollections(
 			ctx,
 			onedrive.NewRestoreHandler(ctrl.AC),
-			backupVersion,
-			restoreCfg,
-			opts,
+			rcc,
+			ctrl.backupDriveIDNames,
 			dcs,
 			deets,
 			errs,
 			ctr)
-	case selectors.ServiceSharePoint:
+	case path.SharePointService:
 		status, err = sharepoint.ConsumeRestoreCollections(
 			ctx,
-			backupVersion,
+			rcc,
 			ctrl.AC,
-			restoreCfg,
-			opts,
+			ctrl.backupDriveIDNames,
 			dcs,
 			deets,
 			errs,
 			ctr)
 	default:
-		err = clues.Wrap(clues.New(sels.Service.String()), "service not supported")
+		err = clues.Wrap(clues.New(service.String()), "service not supported")
 	}
 
 	ctrl.incrementAwaitingMessages()

View File

@@ -107,7 +107,7 @@ func (suite *LibrariesBackupUnitSuite) TestUpdateCollections() {
 				tenantID,
 				site,
 				nil,
-				control.Defaults())
+				control.DefaultOptions())
 
 			c.CollectionMap = collMap
@@ -210,7 +210,7 @@ func (suite *SharePointPagesSuite) TestCollectPages() {
 		ac,
 		mock.NewProvider(siteID, siteID),
 		&MockGraphService{},
-		control.Defaults(),
+		control.DefaultOptions(),
 		fault.New(true))
 	assert.NoError(t, err, clues.ToCore(err))
 	assert.NotEmpty(t, col)

View File

@@ -168,7 +168,7 @@ func (suite *SharePointCollectionSuite) TestCollection_Items() {
 				suite.ac,
 				test.category,
 				nil,
-				control.Defaults())
+				control.DefaultOptions())
 
 			col.data <- test.getItem(t, test.itemName)
 			readItems := []data.Stream{}

View File

@@ -157,11 +157,25 @@ func (h libraryBackupHandler) IncludesDir(dir string) bool {
 var _ onedrive.RestoreHandler = &libraryRestoreHandler{}
 
 type libraryRestoreHandler struct {
-	ac api.Drives
+	ac api.Client
+}
+
+func (h libraryRestoreHandler) PostDrive(
+	ctx context.Context,
+	siteID, driveName string,
+) (models.Driveable, error) {
+	return h.ac.Lists().PostDrive(ctx, siteID, driveName)
 }
 
 func NewRestoreHandler(ac api.Client) *libraryRestoreHandler {
-	return &libraryRestoreHandler{ac.Drives()}
+	return &libraryRestoreHandler{ac}
+}
+
+func (h libraryRestoreHandler) NewDrivePager(
+	resourceOwner string,
+	fields []string,
+) api.DrivePager {
+	return h.ac.Drives().NewSiteDrivePager(resourceOwner, fields)
 }
 
 func (h libraryRestoreHandler) AugmentItemInfo(
@@ -177,21 +191,21 @@ func (h libraryRestoreHandler) DeleteItem(
 	ctx context.Context,
 	driveID, itemID string,
 ) error {
-	return h.ac.DeleteItem(ctx, driveID, itemID)
+	return h.ac.Drives().DeleteItem(ctx, driveID, itemID)
 }
 
 func (h libraryRestoreHandler) DeleteItemPermission(
 	ctx context.Context,
 	driveID, itemID, permissionID string,
 ) error {
-	return h.ac.DeleteItemPermission(ctx, driveID, itemID, permissionID)
+	return h.ac.Drives().DeleteItemPermission(ctx, driveID, itemID, permissionID)
 }
 
 func (h libraryRestoreHandler) GetItemsInContainerByCollisionKey(
 	ctx context.Context,
 	driveID, containerID string,
 ) (map[string]api.DriveItemIDType, error) {
-	m, err := h.ac.GetItemsInContainerByCollisionKey(ctx, driveID, containerID)
+	m, err := h.ac.Drives().GetItemsInContainerByCollisionKey(ctx, driveID, containerID)
 	if err != nil {
 		return nil, err
 	}
@@ -203,7 +217,7 @@ func (h libraryRestoreHandler) NewItemContentUpload(
 	ctx context.Context,
 	driveID, itemID string,
 ) (models.UploadSessionable, error) {
-	return h.ac.NewItemContentUpload(ctx, driveID, itemID)
+	return h.ac.Drives().NewItemContentUpload(ctx, driveID, itemID)
 }
 
 func (h libraryRestoreHandler) PostItemPermissionUpdate(
@@ -211,7 +225,7 @@ func (h libraryRestoreHandler) PostItemPermissionUpdate(
 	driveID, itemID string,
 	body *drives.ItemItemsItemInvitePostRequestBody,
 ) (drives.ItemItemsItemInviteResponseable, error) {
-	return h.ac.PostItemPermissionUpdate(ctx, driveID, itemID, body)
+	return h.ac.Drives().PostItemPermissionUpdate(ctx, driveID, itemID, body)
 }
 
 func (h libraryRestoreHandler) PostItemLinkShareUpdate(
@@ -219,7 +233,7 @@ func (h libraryRestoreHandler) PostItemLinkShareUpdate(
 	driveID, itemID string,
 	body *drives.ItemItemsItemCreateLinkPostRequestBody,
 ) (models.Permissionable, error) {
-	return h.ac.PostItemLinkShareUpdate(ctx, driveID, itemID, body)
+	return h.ac.Drives().PostItemLinkShareUpdate(ctx, driveID, itemID, body)
 }
 
 func (h libraryRestoreHandler) PostItemInContainer(
@@ -228,21 +242,21 @@ func (h libraryRestoreHandler) PostItemInContainer(
 	newItem models.DriveItemable,
 	onCollision control.CollisionPolicy,
 ) (models.DriveItemable, error) {
-	return h.ac.PostItemInContainer(ctx, driveID, parentFolderID, newItem, onCollision)
+	return h.ac.Drives().PostItemInContainer(ctx, driveID, parentFolderID, newItem, onCollision)
 }
 
 func (h libraryRestoreHandler) GetFolderByName(
 	ctx context.Context,
 	driveID, parentFolderID, folderName string,
 ) (models.DriveItemable, error) {
-	return h.ac.GetFolderByName(ctx, driveID, parentFolderID, folderName)
+	return h.ac.Drives().GetFolderByName(ctx, driveID, parentFolderID, folderName)
 }
 
 func (h libraryRestoreHandler) GetRootFolder(
 	ctx context.Context,
 	driveID string,
 ) (models.DriveItemable, error) {
-	return h.ac.GetRootFolder(ctx, driveID)
+	return h.ac.Drives().GetRootFolder(ctx, driveID)
 }
 
 // ---------------------------------------------------------------------------

View File

@@ -10,6 +10,8 @@ import (
 	"github.com/alcionai/clues"
 	"github.com/microsoftgraph/msgraph-sdk-go/models"
 
+	"github.com/alcionai/corso/src/internal/common/dttm"
+	"github.com/alcionai/corso/src/internal/common/idname"
 	"github.com/alcionai/corso/src/internal/common/ptr"
 	"github.com/alcionai/corso/src/internal/data"
 	"github.com/alcionai/corso/src/internal/diagnostics"
@@ -17,6 +19,7 @@ import (
 	"github.com/alcionai/corso/src/internal/m365/onedrive"
 	betaAPI "github.com/alcionai/corso/src/internal/m365/sharepoint/api"
 	"github.com/alcionai/corso/src/internal/m365/support"
+	"github.com/alcionai/corso/src/internal/operations/inject"
 	"github.com/alcionai/corso/src/pkg/backup/details"
 	"github.com/alcionai/corso/src/pkg/control"
 	"github.com/alcionai/corso/src/pkg/count"
@@ -29,21 +32,26 @@ import (
 // ConsumeRestoreCollections will restore the specified data collections into OneDrive
 func ConsumeRestoreCollections(
 	ctx context.Context,
-	backupVersion int,
+	rcc inject.RestoreConsumerConfig,
 	ac api.Client,
-	restoreCfg control.RestoreConfig,
-	opts control.Options,
+	backupDriveIDNames idname.Cacher,
 	dcs []data.RestoreCollection,
 	deets *details.Builder,
 	errs *fault.Bus,
 	ctr *count.Bus,
 ) (*support.ControllerOperationStatus, error) {
 	var (
+		lrh            = libraryRestoreHandler{ac}
 		restoreMetrics support.CollectionMetrics
-		caches         = onedrive.NewRestoreCaches()
+		caches         = onedrive.NewRestoreCaches(backupDriveIDNames)
 		el             = errs.Local()
 	)
 
+	err := caches.Populate(ctx, lrh, rcc.ProtectedResource.ID())
+	if err != nil {
+		return nil, clues.Wrap(err, "initializing restore caches")
+	}
+
 	// Reorder collections so that the parents directories are created
 	// before the child directories; a requirement for permissions.
 	data.SortRestoreCollections(dcs)
@@ -60,7 +68,7 @@ func ConsumeRestoreCollections(
 			metrics support.CollectionMetrics
 			ictx    = clues.Add(ctx,
 				"category", category,
-				"restore_location", restoreCfg.Location,
+				"restore_location", clues.Hide(rcc.RestoreConfig.Location),
 				"resource_owner", clues.Hide(dc.FullPath().ResourceOwner()),
 				"full_path", dc.FullPath())
 		)
@@ -69,13 +77,12 @@ func ConsumeRestoreCollections(
 		case path.LibrariesCategory:
 			metrics, err = onedrive.RestoreCollection(
 				ictx,
-				libraryRestoreHandler{ac.Drives()},
-				restoreCfg,
-				backupVersion,
+				lrh,
+				rcc,
 				dc,
 				caches,
 				deets,
-				opts.RestorePermissions,
+				control.DefaultRestoreContainerName(dttm.HumanReadableDriveItem),
 				errs,
 				ctr)
@@ -84,7 +91,7 @@ func ConsumeRestoreCollections(
 				ictx,
 				ac.Stable,
 				dc,
-				restoreCfg.Location,
+				rcc.RestoreConfig.Location,
 				deets,
 				errs)
@@ -93,7 +100,7 @@ func ConsumeRestoreCollections(
 				ictx,
 				ac.Stable,
 				dc,
-				restoreCfg.Location,
+				rcc.RestoreConfig.Location,
 				deets,
 				errs)
@@ -117,7 +124,7 @@ func ConsumeRestoreCollections(
 		support.Restore,
 		len(dcs),
 		restoreMetrics,
-		restoreCfg.Location)
+		rcc.RestoreConfig.Location)
 
 	return status, el.Failure()
 }

View File

@@ -360,7 +360,7 @@ func (suite *BackupOpUnitSuite) TestBackupOperation_PersistResults() {
 			op, err := NewBackupOperation(
 				ctx,
-				control.Defaults(),
+				control.DefaultOptions(),
 				kw,
 				sw,
 				ctrl,
@@ -1241,7 +1241,7 @@ func (suite *BackupOpIntegrationSuite) TestNewBackupOperation() {
 		sw   = &store.Wrapper{}
 		ctrl = &mock.Controller{}
 		acct = tconfig.NewM365Account(suite.T())
-		opts = control.Defaults()
+		opts = control.DefaultOptions()
 	)
 
 	table := []struct {

View File

@@ -27,7 +27,7 @@ func ControllerWithSelector(
 	ins idname.Cacher,
 	onFail func(),
 ) (*m365.Controller, selectors.Selector) {
-	ctrl, err := m365.NewController(ctx, acct, cr, sel.PathService(), control.Defaults())
+	ctrl, err := m365.NewController(ctx, acct, cr, sel.PathService(), control.DefaultOptions())
 	if !assert.NoError(t, err, clues.ToCore(err)) {
 		if onFail != nil {
 			onFail()
@@ -36,7 +36,7 @@ func ControllerWithSelector(
 		t.FailNow()
 	}
 
-	id, name, err := ctrl.PopulateOwnerIDAndNamesFrom(ctx, sel.DiscreteOwner, ins)
+	id, name, err := ctrl.PopulateProtectedResourceIDAndName(ctx, sel.DiscreteOwner, ins)
 	if !assert.NoError(t, err, clues.ToCore(err)) {
 		if onFail != nil {
 			onFail()

View File

@@ -0,0 +1,18 @@
+package inject
+
+import (
+	"github.com/alcionai/corso/src/internal/common/idname"
+	"github.com/alcionai/corso/src/pkg/control"
+	"github.com/alcionai/corso/src/pkg/selectors"
+)
+
+// RestoreConsumerConfig container-of-things for holding options and
+// configurations from various packages, which are widely used by all
+// restore consumers independent of service or data category.
+type RestoreConsumerConfig struct {
+	BackupVersion     int
+	Options           control.Options
+	ProtectedResource idname.Provider
+	RestoreConfig     control.RestoreConfig
+	Selector          selectors.Selector
+}

View File

@@ -36,16 +36,44 @@ type (
 	RestoreConsumer interface {
 		ConsumeRestoreCollections(
 			ctx context.Context,
-			backupVersion int,
-			selector selectors.Selector,
-			restoreCfg control.RestoreConfig,
-			opts control.Options,
+			rcc RestoreConsumerConfig,
 			dcs []data.RestoreCollection,
 			errs *fault.Bus,
 			ctr *count.Bus,
 		) (*details.Details, error)
 
 		Wait() *data.CollectionStats
+
+		CacheItemInfoer
+		PopulateProtectedResourceIDAndNamer
+	}
+
+	CacheItemInfoer interface {
+		// CacheItemInfo is used by the consumer to cache metadata that is
+		// sourced from per-item info, but may be valuable to the restore at
+		// large.
+		// Ex: pairing drive ids with drive names as they appeared at the time
+		// of backup.
+		CacheItemInfo(v details.ItemInfo)
+	}
+
+	PopulateProtectedResourceIDAndNamer interface {
+		// PopulateProtectedResourceIDAndName takes the provided owner identifier and produces
+		// the owner's name and ID from that value. Returns an error if the owner is
+		// not recognized by the current tenant.
+		//
+		// The id-name swapper should be optional. Some processes will look up all owners in
+		// the tenant before reaching this step. In that case, the data gets handed
+		// down for this func to consume instead of performing further queries. The
+		// data gets stored inside the controller instance for later re-use.
+		PopulateProtectedResourceIDAndName(
+			ctx context.Context,
+			owner string, // input value, can be either id or name
+			ins idname.Cacher,
+		) (
+			id, name string,
+			err error,
+		)
 	}
 
 	RepoMaintenancer interface {

View File

@@ -54,7 +54,7 @@ func (suite *MaintenanceOpIntegrationSuite) TestRepoMaintenance() {
 	mo, err := NewMaintenanceOperation(
 		ctx,
-		control.Defaults(),
+		control.DefaultOptions(),
 		kw,
 		repository.Maintenance{
 			Type: repository.MetadataMaintenance,

View File

@@ -26,7 +26,7 @@ func TestOperationSuite(t *testing.T) {
 func (suite *OperationSuite) TestNewOperation() {
 	t := suite.T()
-	op := newOperation(control.Defaults(), events.Bus{}, &count.Bus{}, nil, nil)
+	op := newOperation(control.DefaultOptions(), events.Bus{}, &count.Bus{}, nil, nil)
 
 	assert.Greater(t, op.CreatedAt, time.Time{})
 }
@@ -46,7 +46,7 @@ func (suite *OperationSuite) TestOperation_Validate() {
 	}
 
 	for _, test := range table {
 		suite.Run(test.name, func() {
-			err := newOperation(control.Defaults(), events.Bus{}, &count.Bus{}, test.kw, test.sw).validate()
+			err := newOperation(control.DefaultOptions(), events.Bus{}, &count.Bus{}, test.kw, test.sw).validate()
 			test.errCheck(suite.T(), err, clues.ToCore(err))
 		})
 	}

View File

@ -11,6 +11,7 @@ import (
"github.com/alcionai/corso/src/internal/common/crash" "github.com/alcionai/corso/src/internal/common/crash"
"github.com/alcionai/corso/src/internal/common/dttm" "github.com/alcionai/corso/src/internal/common/dttm"
"github.com/alcionai/corso/src/internal/common/idname"
"github.com/alcionai/corso/src/internal/data" "github.com/alcionai/corso/src/internal/data"
"github.com/alcionai/corso/src/internal/diagnostics" "github.com/alcionai/corso/src/internal/diagnostics"
"github.com/alcionai/corso/src/internal/events" "github.com/alcionai/corso/src/internal/events"
@ -172,7 +173,7 @@ func (op *RestoreOperation) Run(ctx context.Context) (restoreDetails *details.De
logger.CtxErr(ctx, err).Error("running restore") logger.CtxErr(ctx, err).Error("running restore")
if errors.Is(err, kopia.ErrNoRestorePath) { if errors.Is(err, kopia.ErrNoRestorePath) {
op.Errors.Fail(clues.New("empty backup or unknown path provided")) op.Errors.Fail(clues.Wrap(err, "empty backup or unknown path provided"))
} }
op.Errors.Fail(clues.Wrap(err, "running restore")) op.Errors.Fail(clues.Wrap(err, "running restore"))
@ -217,17 +218,33 @@ func (op *RestoreOperation) do(
return nil, clues.Wrap(err, "getting backup and details") return nil, clues.Wrap(err, "getting backup and details")
} }
observe.Message(ctx, "Restoring", observe.Bullet, clues.Hide(bup.Selector.DiscreteOwner)) restoreToProtectedResource, err := chooseRestoreResource(ctx, op.rc, op.RestoreCfg, bup.Selector)
if err != nil {
return nil, clues.Wrap(err, "getting destination protected resource")
}
paths, err := formatDetailsForRestoration(ctx, bup.Version, op.Selectors, deets, op.Errors) ctx = clues.Add(
ctx,
"backup_protected_resource_id", bup.Selector.ID(),
"backup_protected_resource_name", clues.Hide(bup.Selector.Name()),
"restore_protected_resource_id", restoreToProtectedResource.ID(),
"restore_protected_resource_name", clues.Hide(restoreToProtectedResource.Name()))
observe.Message(ctx, "Restoring", observe.Bullet, clues.Hide(restoreToProtectedResource.Name()))
paths, err := formatDetailsForRestoration(
ctx,
bup.Version,
op.Selectors,
deets,
op.rc,
op.Errors)
if err != nil { if err != nil {
return nil, clues.Wrap(err, "formatting paths from details") return nil, clues.Wrap(err, "formatting paths from details")
} }
ctx = clues.Add( ctx = clues.Add(
ctx, ctx,
"resource_owner_id", bup.Selector.ID(),
"resource_owner_name", clues.Hide(bup.Selector.Name()),
"details_entries", len(deets.Entries), "details_entries", len(deets.Entries),
"details_paths", len(paths), "details_paths", len(paths),
"backup_snapshot_id", bup.SnapshotID, "backup_snapshot_id", bup.SnapshotID,
@ -248,7 +265,12 @@ func (op *RestoreOperation) do(
kopiaComplete := observe.MessageWithCompletion(ctx, "Enumerating items in repository") kopiaComplete := observe.MessageWithCompletion(ctx, "Enumerating items in repository")
defer close(kopiaComplete) defer close(kopiaComplete)
dcs, err := op.kopia.ProduceRestoreCollections(ctx, bup.SnapshotID, paths, opStats.bytesRead, op.Errors) dcs, err := op.kopia.ProduceRestoreCollections(
ctx,
bup.SnapshotID,
paths,
opStats.bytesRead,
op.Errors)
if err != nil { if err != nil {
return nil, clues.Wrap(err, "producing collections to restore") return nil, clues.Wrap(err, "producing collections to restore")
} }
@ -265,6 +287,7 @@ func (op *RestoreOperation) do(
ctx, ctx,
op.rc, op.rc,
bup.Version, bup.Version,
restoreToProtectedResource,
op.Selectors, op.Selectors,
op.RestoreCfg, op.RestoreCfg,
op.Options, op.Options,
@ -315,6 +338,24 @@ func (op *RestoreOperation) persistResults(
return op.Errors.Failure() return op.Errors.Failure()
} }
func chooseRestoreResource(
ctx context.Context,
pprian inject.PopulateProtectedResourceIDAndNamer,
restoreCfg control.RestoreConfig,
orig idname.Provider,
) (idname.Provider, error) {
if len(restoreCfg.ProtectedResource) == 0 {
return orig, nil
}
id, name, err := pprian.PopulateProtectedResourceIDAndName(
ctx,
restoreCfg.ProtectedResource,
nil)
return idname.NewProvider(id, name), clues.Stack(err).OrNil()
}
// --------------------------------------------------------------------------- // ---------------------------------------------------------------------------
// Restorer funcs // Restorer funcs
// --------------------------------------------------------------------------- // ---------------------------------------------------------------------------
@ -323,6 +364,7 @@ func consumeRestoreCollections(
ctx context.Context, ctx context.Context,
rc inject.RestoreConsumer, rc inject.RestoreConsumer,
backupVersion int, backupVersion int,
toProtectedResoruce idname.Provider,
sel selectors.Selector, sel selectors.Selector,
restoreCfg control.RestoreConfig, restoreCfg control.RestoreConfig,
opts control.Options, opts control.Options,
@ -336,15 +378,15 @@ func consumeRestoreCollections(
close(complete) close(complete)
}() }()
rcc := inject.RestoreConsumerConfig{
BackupVersion: backupVersion,
Options: opts,
ProtectedResource: toProtectedResoruce,
RestoreConfig: restoreCfg,
Selector: sel,
}

deets, err := rc.ConsumeRestoreCollections(ctx, rcc, dcs, errs, ctr)
if err != nil {
return nil, clues.Wrap(err, "restoring collections")
}
@ -359,6 +401,7 @@ func formatDetailsForRestoration(
backupVersion int,
sel selectors.Selector,
deets *details.Details,
cii inject.CacheItemInfoer,
errs *fault.Bus,
) ([]path.RestorePaths, error) {
fds, err := sel.Reduce(ctx, deets, errs)
@ -366,6 +409,11 @@ func formatDetailsForRestoration(
return nil, err
}
// allow restore controllers to iterate over item metadata
for _, ent := range fds.Entries {
cii.CacheItemInfo(ent.ItemInfo)
}
paths, err := pathtransformer.GetPaths(ctx, backupVersion, fds.Items(), errs)
if err != nil {
return nil, clues.Wrap(err, "getting restore paths")

View File

@ -11,6 +11,7 @@ import (
"github.com/stretchr/testify/suite"

"github.com/alcionai/corso/src/internal/common/dttm"
"github.com/alcionai/corso/src/internal/common/idname"
inMock "github.com/alcionai/corso/src/internal/common/idname/mock"
"github.com/alcionai/corso/src/internal/data"
"github.com/alcionai/corso/src/internal/events"
@ -41,15 +42,15 @@ import (
// unit
// ---------------------------------------------------------------------------
type RestoreOpUnitSuite struct {
tester.Suite
}

func TestRestoreOpUnitSuite(t *testing.T) {
suite.Run(t, &RestoreOpUnitSuite{Suite: tester.NewUnitSuite(t)})
}

func (suite *RestoreOpUnitSuite) TestRestoreOperation_PersistResults() {
var (
kw = &kopia.Wrapper{}
sw = &store.Wrapper{}
@ -111,7 +112,7 @@ func (suite *RestoreOpSuite) TestRestoreOperation_PersistResults() {
op, err := NewRestoreOperation(
ctx,
control.DefaultOptions(),
kw,
sw,
ctrl,
@ -139,6 +140,75 @@ func (suite *RestoreOpSuite) TestRestoreOperation_PersistResults() {
}
}
func (suite *RestoreOpUnitSuite) TestChooseRestoreResource() {
var (
id = "id"
name = "name"
cfgWithPR = control.DefaultRestoreConfig(dttm.HumanReadable)
)
cfgWithPR.ProtectedResource = "cfgid"
table := []struct {
name string
cfg control.RestoreConfig
ctrl *mock.Controller
orig idname.Provider
expectErr assert.ErrorAssertionFunc
expectProvider assert.ValueAssertionFunc
expectID string
expectName string
}{
{
name: "use original",
cfg: control.DefaultRestoreConfig(dttm.HumanReadable),
ctrl: &mock.Controller{
ProtectedResourceID: id,
ProtectedResourceName: name,
},
orig: idname.NewProvider("oid", "oname"),
expectErr: assert.NoError,
expectID: "oid",
expectName: "oname",
},
{
name: "look up resource with iface",
cfg: cfgWithPR,
ctrl: &mock.Controller{
ProtectedResourceID: id,
ProtectedResourceName: name,
},
orig: idname.NewProvider("oid", "oname"),
expectErr: assert.NoError,
expectID: id,
expectName: name,
},
{
name: "error looking up protected resource",
cfg: cfgWithPR,
ctrl: &mock.Controller{
ProtectedResourceErr: assert.AnError,
},
orig: idname.NewProvider("oid", "oname"),
expectErr: assert.Error,
},
}
for _, test := range table {
suite.Run(test.name, func() {
t := suite.T()
ctx, flush := tester.NewContext(t)
defer flush()
result, err := chooseRestoreResource(ctx, test.ctrl, test.cfg, test.orig)
test.expectErr(t, err, clues.ToCore(err))
require.NotNil(t, result)
assert.Equal(t, test.expectID, result.ID())
assert.Equal(t, test.expectName, result.Name())
})
}
}
// ---------------------------------------------------------------------------
// integration
// ---------------------------------------------------------------------------
@ -227,7 +297,7 @@ func (suite *RestoreOpIntegrationSuite) TestNewRestoreOperation() {
sw = &store.Wrapper{}
ctrl = &mock.Controller{}
restoreCfg = testdata.DefaultRestoreConfig("")
opts = control.DefaultOptions()
)

table := []struct {
@ -292,7 +362,7 @@ func setupExchangeBackup(
bo, err := NewBackupOperation(
ctx,
control.DefaultOptions(),
kw,
sw,
ctrl,
@ -343,7 +413,7 @@ func setupSharePointBackup(
bo, err := NewBackupOperation(
ctx,
control.DefaultOptions(),
kw,
sw,
ctrl,
@ -372,87 +442,6 @@ func setupSharePointBackup(
}
}
func (suite *RestoreOpIntegrationSuite) TestRestore_Run() {
tables := []struct {
name string
owner string
restoreCfg control.RestoreConfig
getSelector func(t *testing.T, owners []string) selectors.Selector
setup func(t *testing.T, kw *kopia.Wrapper, sw *store.Wrapper, acct account.Account, owner string) bupResults
}{
{
name: "Exchange_Restore",
owner: tconfig.M365UserID(suite.T()),
restoreCfg: testdata.DefaultRestoreConfig(""),
getSelector: func(t *testing.T, owners []string) selectors.Selector {
rsel := selectors.NewExchangeRestore(owners)
rsel.Include(rsel.AllData())
return rsel.Selector
},
setup: setupExchangeBackup,
},
{
name: "SharePoint_Restore",
owner: tconfig.M365SiteID(suite.T()),
restoreCfg: control.DefaultRestoreConfig(dttm.SafeForTesting),
getSelector: func(t *testing.T, owners []string) selectors.Selector {
rsel := selectors.NewSharePointRestore(owners)
rsel.Include(rsel.Library(tconfig.LibraryDocuments), rsel.Library(tconfig.LibraryMoreDocuments))
return rsel.Selector
},
setup: setupSharePointBackup,
},
}
for _, test := range tables {
suite.Run(test.name, func() {
var (
t = suite.T()
mb = evmock.NewBus()
bup = test.setup(t, suite.kw, suite.sw, suite.acct, test.owner)
)
ctx, flush := tester.NewContext(t)
defer flush()
require.NotZero(t, bup.items)
require.NotEmpty(t, bup.backupID)
ro, err := NewRestoreOperation(
ctx,
control.Options{FailureHandling: control.FailFast},
suite.kw,
suite.sw,
bup.ctrl,
tconfig.NewM365Account(t),
bup.backupID,
test.getSelector(t, bup.selectorResourceOwners),
test.restoreCfg,
mb,
count.New())
require.NoError(t, err, clues.ToCore(err))
ds, err := ro.Run(ctx)
require.NoError(t, err, "restoreOp.Run() %+v", clues.ToCore(err))
require.NotEmpty(t, ro.Results, "restoreOp results")
require.NotNil(t, ds, "restored details")
assert.Equal(t, ro.Status, Completed, "restoreOp status")
assert.Equal(t, ro.Results.ItemsWritten, len(ds.Items()), "item write count matches len details")
assert.Less(t, 0, ro.Results.ItemsRead, "restore items read")
assert.Less(t, int64(0), ro.Results.BytesRead, "bytes read")
assert.Equal(t, 1, ro.Results.ResourceOwners, "resource Owners")
assert.NoError(t, ro.Errors.Failure(), "non-recoverable error", clues.ToCore(ro.Errors.Failure()))
assert.Empty(t, ro.Errors.Recovered(), "recoverable errors")
assert.Equal(t, bup.items, ro.Results.ItemsWritten, "backup and restore wrote the same num of items")
assert.Equal(t, 1, mb.TimesCalled[events.RestoreStart], "restore-start events")
assert.Equal(t, 1, mb.TimesCalled[events.RestoreEnd], "restore-end events")
})
}
}
func (suite *RestoreOpIntegrationSuite) TestRestore_Run_errorNoBackup() {
t := suite.T()
@ -472,12 +461,12 @@ func (suite *RestoreOpIntegrationSuite) TestRestore_Run_errorNoBackup() {
suite.acct,
resource.Users,
rsel.PathService(),
control.DefaultOptions())
require.NoError(t, err, clues.ToCore(err))

ro, err := NewRestoreOperation(
ctx,
control.DefaultOptions(),
suite.kw,
suite.sw,
ctrl,

File diff suppressed because it is too large

View File

@ -25,6 +25,7 @@ import (
"github.com/alcionai/corso/src/internal/m365/resource"
"github.com/alcionai/corso/src/internal/model"
"github.com/alcionai/corso/src/internal/operations"
"github.com/alcionai/corso/src/internal/operations/inject"
"github.com/alcionai/corso/src/internal/streamstore"
"github.com/alcionai/corso/src/internal/tester"
"github.com/alcionai/corso/src/internal/tester/tconfig"
@ -406,6 +407,7 @@ func generateContainerOfItems(
restoreCfg := control.DefaultRestoreConfig(dttm.SafeForTesting)
restoreCfg.Location = destFldr
restoreCfg.IncludePermissions = true
dataColls := buildCollections(
t,
@ -414,15 +416,19 @@ func generateContainerOfItems(
restoreCfg,
collections)
opts := control.DefaultOptions()

rcc := inject.RestoreConsumerConfig{
BackupVersion: backupVersion,
Options: opts,
ProtectedResource: sel,
RestoreConfig: restoreCfg,
Selector: sel,
}

deets, err := ctrl.ConsumeRestoreCollections(
ctx,
rcc,
dataColls,
fault.New(true),
count.New())
@ -541,7 +547,7 @@ func ControllerWithSelector(
ins idname.Cacher,
onFail func(*testing.T, context.Context),
) (*m365.Controller, selectors.Selector) {
ctrl, err := m365.NewController(ctx, acct, cr, sel.PathService(), control.DefaultOptions())
if !assert.NoError(t, err, clues.ToCore(err)) {
if onFail != nil {
onFail(t, ctx)
@ -550,7 +556,7 @@ func ControllerWithSelector(
t.FailNow()
}
id, name, err := ctrl.PopulateProtectedResourceIDAndName(ctx, sel.DiscreteOwner, ins)
if !assert.NoError(t, err, clues.ToCore(err)) {
if onFail != nil {
onFail(t, ctx)
@ -568,15 +574,19 @@ func ControllerWithSelector(
// Suite Setup
// ---------------------------------------------------------------------------
type ids struct {
ID string
DriveID string
DriveRootFolderID string
}
type intgTesterSetup struct {
ac api.Client
gockAC api.Client
user ids
secondaryUser ids
site ids
secondarySite ids
}
func newIntegrationTesterSetup(t *testing.T) intgTesterSetup {
@ -597,37 +607,52 @@ func newIntegrationTesterSetup(t *testing.T) intgTesterSetup {
its.gockAC, err = mock.NewClient(creds)
require.NoError(t, err, clues.ToCore(err))

its.user = userIDs(t, tconfig.M365UserID(t), its.ac)
its.secondaryUser = userIDs(t, tconfig.SecondaryM365UserID(t), its.ac)
its.site = siteIDs(t, tconfig.M365SiteID(t), its.ac)
its.secondarySite = siteIDs(t, tconfig.SecondaryM365SiteID(t), its.ac)

return its
}
func userIDs(t *testing.T, id string, ac api.Client) ids {
ctx, flush := tester.NewContext(t)
defer flush()
r := ids{ID: id}
drive, err := ac.Users().GetDefaultDrive(ctx, id)
require.NoError(t, err, clues.ToCore(err))
r.DriveID = ptr.Val(drive.GetId())
driveRootFolder, err := ac.Drives().GetRootFolder(ctx, r.DriveID)
require.NoError(t, err, clues.ToCore(err))
r.DriveRootFolderID = ptr.Val(driveRootFolder.GetId())
return r
}
func siteIDs(t *testing.T, id string, ac api.Client) ids {
ctx, flush := tester.NewContext(t)
defer flush()
r := ids{ID: id}
drive, err := ac.Sites().GetDefaultDrive(ctx, id)
require.NoError(t, err, clues.ToCore(err))
r.DriveID = ptr.Val(drive.GetId())
driveRootFolder, err := ac.Drives().GetRootFolder(ctx, r.DriveID)
require.NoError(t, err, clues.ToCore(err))
r.DriveRootFolderID = ptr.Val(driveRootFolder.GetId())
return r
}
func getTestExtensionFactories() []extensions.CreateItemExtensioner {
return []extensions.CreateItemExtensioner{
&extensions.MockItemExtensionFactory{},

View File

@ -72,7 +72,7 @@ func (suite *OneDriveBackupIntgSuite) TestBackup_Run_oneDrive() {
osel = selectors.NewOneDriveBackup([]string{userID})
ws = deeTD.DriveIDFromRepoRef
svc = path.OneDriveService
opts = control.DefaultOptions()
)

osel.Include(selTD.OneDriveBackupFolderScope(osel))
@ -106,7 +106,7 @@ func (suite *OneDriveBackupIntgSuite) TestBackup_Run_oneDrive() {
}

func (suite *OneDriveBackupIntgSuite) TestBackup_Run_incrementalOneDrive() {
sel := selectors.NewOneDriveRestore([]string{suite.its.user.ID})

ic := func(cs []string) selectors.Selector {
sel.Include(sel.Folders(cs, selectors.PrefixMatch()))
@ -117,10 +117,10 @@ func (suite *OneDriveBackupIntgSuite) TestBackup_Run_incrementalOneDrive() {
t *testing.T,
ctx context.Context,
) string {
d, err := suite.its.ac.Users().GetDefaultDrive(ctx, suite.its.user.ID)
if err != nil {
err = graph.Wrap(ctx, err, "retrieving default user drive").
With("user", suite.its.user.ID)
}

require.NoError(t, err, clues.ToCore(err))
@ -137,8 +137,8 @@ func (suite *OneDriveBackupIntgSuite) TestBackup_Run_incrementalOneDrive() {
runDriveIncrementalTest(
suite,
suite.its.user.ID,
suite.its.user.ID,
resource.Users,
path.OneDriveService,
path.FilesCategory,
@ -166,7 +166,7 @@ func runDriveIncrementalTest(
var (
acct = tconfig.NewM365Account(t)
opts = control.DefaultOptions()
mb = evmock.NewBus()
ws = deeTD.DriveIDFromRepoRef
@ -683,7 +683,7 @@ func runDriveIncrementalTest(
}

for _, test := range table {
suite.Run(test.name, func() {
cleanCtrl, err := m365.NewController(ctx, acct, rc, sel.PathService(), control.DefaultOptions())
require.NoError(t, err, clues.ToCore(err))

bod.ctrl = cleanCtrl
@ -785,7 +785,7 @@ func (suite *OneDriveBackupIntgSuite) TestBackup_Run_oneDriveOwnerMigration() {
var (
acct = tconfig.NewM365Account(t)
opts = control.DefaultOptions()
mb = evmock.NewBus()
categories = map[path.CategoryType][]string{
@ -801,10 +801,10 @@ func (suite *OneDriveBackupIntgSuite) TestBackup_Run_oneDriveOwnerMigration() {
acct,
resource.Users,
path.OneDriveService,
control.DefaultOptions())
require.NoError(t, err, clues.ToCore(err))

userable, err := ctrl.AC.Users().GetByID(ctx, suite.its.user.ID)
require.NoError(t, err, clues.ToCore(err))

uid := ptr.Val(userable.GetId())
@ -922,7 +922,7 @@ func (suite *OneDriveBackupIntgSuite) TestBackup_Run_oneDriveExtensions() {
osel = selectors.NewOneDriveBackup([]string{userID})
ws = deeTD.DriveIDFromRepoRef
svc = path.OneDriveService
opts = control.DefaultOptions()
)

opts.ItemExtensionFactory = getTestExtensionFactories()
@ -982,17 +982,17 @@ func (suite *OneDriveRestoreIntgSuite) SetupSuite() {
}

func (suite *OneDriveRestoreIntgSuite) TestRestore_Run_onedriveWithAdvancedOptions() {
sel := selectors.NewOneDriveBackup([]string{suite.its.user.ID})
sel.Include(selTD.OneDriveBackupFolderScope(sel))
sel.DiscreteOwner = suite.its.user.ID

runDriveRestoreWithAdvancedOptions(
suite.T(),
suite,
suite.its.ac,
sel.Selector,
suite.its.user.DriveID,
suite.its.user.DriveRootFolderID)
}
func runDriveRestoreWithAdvancedOptions(
@ -1009,7 +1009,7 @@ func runDriveRestoreWithAdvancedOptions(
var (
mb = evmock.NewBus()
opts = control.DefaultOptions()
)

bo, bod := prepNewTestBackupOp(t, ctx, mb, sel, opts, version.Backup)
@ -1250,3 +1250,173 @@ func runDriveRestoreWithAdvancedOptions(
assert.Subset(t, maps.Keys(currentFileIDs), maps.Keys(fileIDs), "original item should exist after copy")
})
}
func (suite *OneDriveRestoreIntgSuite) TestRestore_Run_onedriveAlternateProtectedResource() {
sel := selectors.NewOneDriveBackup([]string{suite.its.user.ID})
sel.Include(selTD.OneDriveBackupFolderScope(sel))
sel.DiscreteOwner = suite.its.user.ID
runDriveRestoreToAlternateProtectedResource(
suite.T(),
suite,
suite.its.ac,
sel.Selector,
suite.its.user,
suite.its.secondaryUser)
}
func runDriveRestoreToAlternateProtectedResource(
t *testing.T,
suite tester.Suite,
ac api.Client,
sel selectors.Selector, // owner should match 'from', both Restore and Backup types work.
from, to ids,
) {
ctx, flush := tester.NewContext(t)
defer flush()
// a backup is required to run restores
var (
mb = evmock.NewBus()
opts = control.DefaultOptions()
)
bo, bod := prepNewTestBackupOp(t, ctx, mb, sel, opts, version.Backup)
defer bod.close(t, ctx)
runAndCheckBackup(t, ctx, &bo, mb, false)
var (
restoreCfg = ctrlTD.DefaultRestoreConfig("drive_restore_to_resource")
fromCollisionKeys map[string]api.DriveItemIDType
fromItemIDs map[string]api.DriveItemIDType
acd = ac.Drives()
)
// first restore to the 'from' resource
suite.Run("restore original resource", func() {
mb = evmock.NewBus()
fromCtr := count.New()
driveID := from.DriveID
rootFolderID := from.DriveRootFolderID
restoreCfg.OnCollision = control.Copy
ro, _ := prepNewTestRestoreOp(
t,
ctx,
bod.st,
bo.Results.BackupID,
mb,
fromCtr,
sel,
opts,
restoreCfg)
runAndCheckRestore(t, ctx, &ro, mb, false)
// get all files in folder, use these as the base
// set of files to compare against.
fromItemIDs, fromCollisionKeys = getDriveCollKeysAndItemIDs(
t,
ctx,
acd,
driveID,
rootFolderID,
restoreCfg.Location,
selTD.TestFolderName)
})
// then restore to the 'to' resource
var (
toCollisionKeys map[string]api.DriveItemIDType
toItemIDs map[string]api.DriveItemIDType
)
suite.Run("restore to alternate resource", func() {
mb = evmock.NewBus()
toCtr := count.New()
driveID := to.DriveID
rootFolderID := to.DriveRootFolderID
restoreCfg.ProtectedResource = to.ID
ro, _ := prepNewTestRestoreOp(
t,
ctx,
bod.st,
bo.Results.BackupID,
mb,
toCtr,
sel,
opts,
restoreCfg)
runAndCheckRestore(t, ctx, &ro, mb, false)
// get all files in folder, use these as the base
// set of files to compare against.
toItemIDs, toCollisionKeys = getDriveCollKeysAndItemIDs(
t,
ctx,
acd,
driveID,
rootFolderID,
restoreCfg.Location,
selTD.TestFolderName)
})
// compare restore results
assert.Equal(t, len(fromItemIDs), len(toItemIDs))
assert.ElementsMatch(t, maps.Keys(fromCollisionKeys), maps.Keys(toCollisionKeys))
}
type GetItemsKeysAndFolderByNameer interface {
GetItemIDsInContainer(
ctx context.Context,
driveID, containerID string,
) (map[string]api.DriveItemIDType, error)
GetFolderByName(
ctx context.Context,
driveID, parentFolderID, folderName string,
) (models.DriveItemable, error)
GetItemsInContainerByCollisionKey(
ctx context.Context,
driveID, containerID string,
) (map[string]api.DriveItemIDType, error)
}
func getDriveCollKeysAndItemIDs(
t *testing.T,
ctx context.Context, //revive:disable-line:context-as-argument
gikafbn GetItemsKeysAndFolderByNameer,
driveID, parentContainerID string,
containerNames ...string,
) (map[string]api.DriveItemIDType, map[string]api.DriveItemIDType) {
var (
c models.DriveItemable
err error
cID string
)
for _, cn := range containerNames {
pcid := parentContainerID
if len(cID) != 0 {
pcid = cID
}
c, err = gikafbn.GetFolderByName(ctx, driveID, pcid, cn)
require.NoError(t, err, clues.ToCore(err))
cID = ptr.Val(c.GetId())
}
itemIDs, err := gikafbn.GetItemIDsInContainer(ctx, driveID, cID)
require.NoError(t, err, clues.ToCore(err))
collisionKeys, err := gikafbn.GetItemsInContainerByCollisionKey(ctx, driveID, cID)
require.NoError(t, err, clues.ToCore(err))
return itemIDs, collisionKeys
}
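`getDriveCollKeysAndItemIDs` resolves nested folders one name at a time, scoping each `GetFolderByName` call to the folder found in the previous step. The traversal can be sketched in isolation; `folderLookup` here is a hypothetical stand-in for the Graph folder lookup, not the real client:

```go
package main

import (
	"errors"
	"fmt"
)

// folderLookup stands in for GetFolderByName: resolve a child
// folder's ID given its parent's ID and the child's name.
type folderLookup func(parentID, name string) (string, error)

// resolvePath walks a nested folder path one segment at a time,
// scoping each lookup to the folder resolved by the previous step.
func resolvePath(look folderLookup, rootID string, names ...string) (string, error) {
	cur := rootID

	for _, n := range names {
		id, err := look(cur, n)
		if err != nil {
			return "", fmt.Errorf("resolving %q under %q: %w", n, cur, err)
		}
		cur = id
	}

	return cur, nil
}

func main() {
	// fake drive tree: a child's ID is parentID + "/" + name
	look := func(parentID, name string) (string, error) {
		if name == "" {
			return "", errors.New("empty folder name")
		}
		return parentID + "/" + name, nil
	}

	id, _ := resolvePath(look, "root", "Corso_Restore", "TestFolder")
	fmt.Println(id)
}
```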

View File

@ -5,6 +5,9 @@ import (
"testing"

"github.com/alcionai/clues"
"github.com/google/uuid"
"github.com/microsoftgraph/msgraph-sdk-go/models"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/suite"
@ -19,6 +22,8 @@ import (
"github.com/alcionai/corso/src/internal/version"
deeTD "github.com/alcionai/corso/src/pkg/backup/details/testdata"
"github.com/alcionai/corso/src/pkg/control"
ctrlTD "github.com/alcionai/corso/src/pkg/control/testdata"
"github.com/alcionai/corso/src/pkg/count"
"github.com/alcionai/corso/src/pkg/path"
"github.com/alcionai/corso/src/pkg/selectors"
selTD "github.com/alcionai/corso/src/pkg/selectors/testdata"
@ -44,7 +49,7 @@ func (suite *SharePointBackupIntgSuite) SetupSuite() {
}

func (suite *SharePointBackupIntgSuite) TestBackup_Run_incrementalSharePoint() {
sel := selectors.NewSharePointRestore([]string{suite.its.site.ID})

ic := func(cs []string) selectors.Selector {
sel.Include(sel.LibraryFolders(cs, selectors.PrefixMatch()))
@ -55,10 +60,10 @@ func (suite *SharePointBackupIntgSuite) TestBackup_Run_incrementalSharePoint() {
t *testing.T,
ctx context.Context,
) string {
d, err := suite.its.ac.Sites().GetDefaultDrive(ctx, suite.its.site.ID)
if err != nil {
err = graph.Wrap(ctx, err, "retrieving default site drive").
With("site", suite.its.site.ID)
}

require.NoError(t, err, clues.ToCore(err))
@ -75,8 +80,8 @@ func (suite *SharePointBackupIntgSuite) TestBackup_Run_incrementalSharePoint() {
runDriveIncrementalTest(
suite,
suite.its.site.ID,
suite.its.user.ID,
resource.Sites,
path.SharePointService,
path.LibrariesCategory,
@ -94,8 +99,8 @@ func (suite *SharePointBackupIntgSuite) TestBackup_Run_sharePoint() {
var (
mb = evmock.NewBus()
sel = selectors.NewSharePointBackup([]string{suite.its.site.ID})
opts = control.DefaultOptions()
)

sel.Include(selTD.SharePointBackupFolderScope(sel))
@ -111,7 +116,7 @@ func (suite *SharePointBackupIntgSuite) TestBackup_Run_sharePoint() {
bod.sw,
&bo,
bod.sel,
suite.its.site.ID,
path.LibrariesCategory)
}
@ -123,8 +128,8 @@ func (suite *SharePointBackupIntgSuite) TestBackup_Run_sharePointExtensions() {
var (
mb = evmock.NewBus()
sel = selectors.NewSharePointBackup([]string{suite.its.site.ID})
opts = control.DefaultOptions()
tenID = tconfig.M365TenantID(t)
svc = path.SharePointService
ws = deeTD.DriveIDFromRepoRef
@ -145,7 +150,7 @@ func (suite *SharePointBackupIntgSuite) TestBackup_Run_sharePointExtensions() {
bod.sw,
&bo,
bod.sel,
suite.its.site.ID,
path.LibrariesCategory)

bID := bo.Results.BackupID
@ -196,16 +201,268 @@ func (suite *SharePointRestoreIntgSuite) SetupSuite() {
}

func (suite *SharePointRestoreIntgSuite) TestRestore_Run_sharepointWithAdvancedOptions() {
sel := selectors.NewSharePointBackup([]string{suite.its.site.ID})
sel.Include(selTD.SharePointBackupFolderScope(sel))
sel.Filter(sel.Library("documents"))
sel.DiscreteOwner = suite.its.site.ID

runDriveRestoreWithAdvancedOptions(
suite.T(),
suite,
suite.its.ac,
sel.Selector,
suite.its.site.DriveID,
suite.its.site.DriveRootFolderID)
}
func (suite *SharePointRestoreIntgSuite) TestRestore_Run_sharepointAlternateProtectedResource() {
sel := selectors.NewSharePointBackup([]string{suite.its.site.ID})
sel.Include(selTD.SharePointBackupFolderScope(sel))
sel.Filter(sel.Library("documents"))
sel.DiscreteOwner = suite.its.site.ID
runDriveRestoreToAlternateProtectedResource(
suite.T(),
suite,
suite.its.ac,
sel.Selector,
suite.its.site,
suite.its.secondarySite)
}
func (suite *SharePointRestoreIntgSuite) TestRestore_Run_sharepointDeletedDrives() {
t := suite.T()
// despite the client having a method for drive.Patch and drive.Delete, both only return
// the error code and message `invalidRequest`.
t.Skip("graph api doesn't allow patch or delete on drives, so we cannot run any conditions")
ctx, flush := tester.NewContext(t)
defer flush()
	rc := ctrlTD.DefaultRestoreConfig("restore_deleted_drives")
	rc.OnCollision = control.Copy

	// create a new drive
	md, err := suite.its.ac.Lists().PostDrive(ctx, suite.its.site.ID, rc.Location)
	require.NoError(t, err, clues.ToCore(err))

	driveID := ptr.Val(md.GetId())

	// get the root folder
	mdi, err := suite.its.ac.Drives().GetRootFolder(ctx, driveID)
	require.NoError(t, err, clues.ToCore(err))

	rootFolderID := ptr.Val(mdi.GetId())

	// add an item to it
	itemName := uuid.NewString()

	item := models.NewDriveItem()
	item.SetName(ptr.To(itemName + ".txt"))

	file := models.NewFile()
	item.SetFile(file)

	_, err = suite.its.ac.Drives().PostItemInContainer(
		ctx,
		driveID,
		rootFolderID,
		item,
		control.Copy)
	require.NoError(t, err, clues.ToCore(err))

	// run a backup
	var (
		mb          = evmock.NewBus()
		opts        = control.DefaultOptions()
		graphClient = suite.its.ac.Stable.Client()
	)

	bsel := selectors.NewSharePointBackup([]string{suite.its.site.ID})
	bsel.Include(selTD.SharePointBackupFolderScope(bsel))
	bsel.Filter(bsel.Library(rc.Location))
	bsel.DiscreteOwner = suite.its.site.ID

	bo, bod := prepNewTestBackupOp(t, ctx, mb, bsel.Selector, opts, version.Backup)
	defer bod.close(t, ctx)

	runAndCheckBackup(t, ctx, &bo, mb, false)

	// test cases:

	// first test, we take the current drive and rename it.
	// the restore should find the drive by id and restore items
	// into it like normal. Due to collision handling, this should
	// create a copy of the current item.
	suite.Run("renamed drive", func() {
		t := suite.T()

		ctx, flush := tester.NewContext(t)
		defer flush()

		patchBody := models.NewDrive()
		patchBody.SetName(ptr.To("some other name"))

		md, err = graphClient.
			Drives().
			ByDriveId(driveID).
			Patch(ctx, patchBody, nil)
		require.NoError(t, err, clues.ToCore(graph.Stack(ctx, err)))

		var (
			mb  = evmock.NewBus()
			ctr = count.New()
		)

		ro, _ := prepNewTestRestoreOp(
			t,
			ctx,
			bod.st,
			bo.Results.BackupID,
			mb,
			ctr,
			bod.sel,
			opts,
			rc)

		runAndCheckRestore(t, ctx, &ro, mb, false)
		assert.Equal(t, 1, ctr.Get(count.NewItemCreated), "restored an item")

		resp, err := graphClient.
			Drives().
			ByDriveId(driveID).
			Items().
			ByDriveItemId(rootFolderID).
			Children().
			Get(ctx, nil)
		require.NoError(t, err, clues.ToCore(graph.Stack(ctx, err)))

		items := resp.GetValue()
		assert.Len(t, items, 2)

		for _, item := range items {
			assert.Contains(t, ptr.Val(item.GetName()), itemName)
		}
	})

	// second test, we delete the drive altogether. the restore should find
	// no existing drives, but it should have the old drive's name and attempt
	// to recreate that drive by name.
	suite.Run("deleted drive", func() {
		t := suite.T()

		ctx, flush := tester.NewContext(t)
		defer flush()

		err = graphClient.
			Drives().
			ByDriveId(driveID).
			Delete(ctx, nil)
		require.NoError(t, err, clues.ToCore(graph.Stack(ctx, err)))

		var (
			mb  = evmock.NewBus()
			ctr = count.New()
		)

		ro, _ := prepNewTestRestoreOp(
			t,
			ctx,
			bod.st,
			bo.Results.BackupID,
			mb,
			ctr,
			bod.sel,
			opts,
			rc)

		runAndCheckRestore(t, ctx, &ro, mb, false)
		assert.Equal(t, 1, ctr.Get(count.NewItemCreated), "restored an item")

		pgr := suite.its.ac.
			Drives().
			NewSiteDrivePager(suite.its.site.ID, []string{"id", "name"})

		drives, err := api.GetAllDrives(ctx, pgr, false, -1)
		require.NoError(t, err, clues.ToCore(err))
		var created models.Driveable

		for _, drive := range drives {
			// the restore should have recreated the drive using the name
			// recorded in the backup (rc.Location), under a new drive ID.
			// note: comparing against created.GetName() here would nil-panic,
			// since created hasn't been assigned yet.
			if ptr.Val(drive.GetName()) == rc.Location &&
				ptr.Val(drive.GetId()) != driveID {
				created = drive
				break
			}
		}

		require.NotNil(t, created, "found the restored drive by name")
		md = created
		driveID = ptr.Val(md.GetId())

		mdi, err := suite.its.ac.Drives().GetRootFolder(ctx, driveID)
		require.NoError(t, err, clues.ToCore(err))

		rootFolderID = ptr.Val(mdi.GetId())

		resp, err := graphClient.
			Drives().
			ByDriveId(driveID).
			Items().
			ByDriveItemId(rootFolderID).
			Children().
			Get(ctx, nil)
		require.NoError(t, err, clues.ToCore(graph.Stack(ctx, err)))

		items := resp.GetValue()
		assert.Len(t, items, 1)
		assert.Equal(t, ptr.Val(items[0].GetName()), itemName+".txt")
	})

	// final test, run a follow-up restore. This should match the
	// drive we created in the prior test by name, but not by ID.
	suite.Run("different drive - same name", func() {
		t := suite.T()

		ctx, flush := tester.NewContext(t)
		defer flush()

		var (
			mb  = evmock.NewBus()
			ctr = count.New()
		)

		ro, _ := prepNewTestRestoreOp(
			t,
			ctx,
			bod.st,
			bo.Results.BackupID,
			mb,
			ctr,
			bod.sel,
			opts,
			rc)

		runAndCheckRestore(t, ctx, &ro, mb, false)
		assert.Equal(t, 1, ctr.Get(count.NewItemCreated), "restored an item")

		resp, err := graphClient.
			Drives().
			ByDriveId(driveID).
			Items().
			ByDriveItemId(rootFolderID).
			Children().
			Get(ctx, nil)
		require.NoError(t, err, clues.ToCore(graph.Stack(ctx, err)))

		items := resp.GetValue()
		assert.Len(t, items, 2)

		for _, item := range items {
			assert.Contains(t, ptr.Val(item.GetName()), itemName)
		}
	})
}


@@ -23,6 +23,7 @@ const (
 	// M365 config
 	TestCfgAzureTenantID   = "azure_tenantid"
+	TestCfgSecondarySiteID = "secondarym365siteid"
 	TestCfgSiteID          = "m365siteid"
 	TestCfgSiteURL         = "m365siteurl"
 	TestCfgUserID          = "m365userid"
@@ -36,13 +37,14 @@ const (
 // test specific env vars
 const (
-	EnvCorsoM365LoadTestUserID       = "CORSO_M365_LOAD_TEST_USER_ID"
-	EnvCorsoM365LoadTestOrgUsers     = "CORSO_M365_LOAD_TEST_ORG_USERS"
 	EnvCorsoM365TestSiteID           = "CORSO_M365_TEST_SITE_ID"
 	EnvCorsoM365TestSiteURL          = "CORSO_M365_TEST_SITE_URL"
 	EnvCorsoM365TestUserID           = "CORSO_M365_TEST_USER_ID"
+	EnvCorsoSecondaryM365TestSiteID  = "CORSO_SECONDARY_M365_TEST_SITE_ID"
 	EnvCorsoSecondaryM365TestUserID  = "CORSO_SECONDARY_M365_TEST_USER_ID"
 	EnvCorsoTertiaryM365TestUserID   = "CORSO_TERTIARY_M365_TEST_USER_ID"
+	EnvCorsoM365LoadTestUserID       = "CORSO_M365_LOAD_TEST_USER_ID"
+	EnvCorsoM365LoadTestOrgUsers     = "CORSO_M365_LOAD_TEST_ORG_USERS"
 	EnvCorsoTestConfigFilePath       = "CORSO_TEST_CONFIG_FILE"
 	EnvCorsoUnlicensedM365TestUserID = "CORSO_M365_TEST_UNLICENSED_USER"
 )
@@ -147,13 +149,19 @@ func ReadTestConfig() (map[string]string, error) {
 		TestCfgSiteID,
 		os.Getenv(EnvCorsoM365TestSiteID),
 		vpr.GetString(TestCfgSiteID),
-		"10rqc2.sharepoint.com,4892edf5-2ebf-46be-a6e5-a40b2cbf1c1a,38ab6d06-fc82-4417-af93-22d8733c22be")
+		"4892edf5-2ebf-46be-a6e5-a40b2cbf1c1a,38ab6d06-fc82-4417-af93-22d8733c22be")
 	fallbackTo(
 		testEnv,
 		TestCfgSiteURL,
 		os.Getenv(EnvCorsoM365TestSiteURL),
 		vpr.GetString(TestCfgSiteURL),
 		"https://10rqc2.sharepoint.com/sites/CorsoCI")
+	fallbackTo(
+		testEnv,
+		TestCfgSecondarySiteID,
+		os.Getenv(EnvCorsoSecondaryM365TestSiteID),
+		vpr.GetString(TestCfgSecondarySiteID),
+		"053684d8-ca6c-4376-a03e-2567816bb091,9b3e9abe-6a5e-4084-8b44-ea5a356fe02c")
 	fallbackTo(
 		testEnv,
 		TestCfgUnlicensedUserID,


@@ -198,6 +198,17 @@ func GetM365SiteID(ctx context.Context) string {
 	return strings.ToLower(cfg[TestCfgSiteID])
 }

+// SecondaryM365SiteID returns a siteID string representing the secondary m365 site ID
+// described by either the env var CORSO_SECONDARY_M365_TEST_SITE_ID, the corso_test.toml
+// config file, or the default value (in that order of priority). The default is a
+// last-attempt fallback that will only work on alcion's testing org.
+func SecondaryM365SiteID(t *testing.T) string {
+	cfg, err := ReadTestConfig()
+	require.NoError(t, err, "retrieving secondary m365 site id from test configuration: %+v", clues.ToCore(err))
+
+	return strings.ToLower(cfg[TestCfgSecondarySiteID])
+}
+
 // UnlicensedM365UserID returns a userID string representing the m365UserID
 // described by either the env var CORSO_M365_TEST_UNLICENSED_USER, the
 // corso_test.toml config file or the default value (in that order of priority).


@@ -9,7 +9,6 @@ import (
 type Options struct {
 	DisableMetrics     bool          `json:"disableMetrics"`
 	FailureHandling    FailurePolicy `json:"failureHandling"`
-	RestorePermissions bool          `json:"restorePermissions"`
 	SkipReduce         bool          `json:"skipReduce"`
 	ToggleFeatures     Toggles       `json:"toggleFeatures"`
 	Parallelism        Parallelism   `json:"parallelism"`
@@ -35,8 +34,8 @@ const (
 	BestEffort FailurePolicy = "best-effort"
 )

-// Defaults provides an Options with the default values set.
-func Defaults() Options {
+// DefaultOptions provides an Options with the default values set.
+func DefaultOptions() Options {
 	return Options{
 		FailureHandling: FailAfterRecovery,
 		ToggleFeatures:  Toggles{},


@@ -52,10 +52,15 @@ type RestoreConfig struct {
 	// Defaults to "Corso_Restore_<current_dttm>"
 	Location string

-	// Drive specifies the drive into which the data will be restored.
-	// If empty, data is restored to the same drive that was backed up.
+	// Drive specifies the name of the drive into which the data will be
+	// restored. If empty, data is restored to the same drive that was backed
+	// up.
 	// Defaults to empty.
 	Drive string
+
+	// IncludePermissions toggles whether the restore will include the original
+	// folder- and item-level permissions.
+	IncludePermissions bool
 }

 func DefaultRestoreConfig(timeFormat dttm.TimeFormat) RestoreConfig {
@@ -65,6 +70,10 @@ func DefaultRestoreConfig(timeFormat dttm.TimeFormat) RestoreConfig {
 	}
 }

+func DefaultRestoreContainerName(timeFormat dttm.TimeFormat) string {
+	return defaultRestoreLocation + dttm.FormatNow(timeFormat)
+}
+
 // EnsureRestoreConfigDefaults sets all non-supported values in the config
 // struct to the default value.
 func EnsureRestoreConfigDefaults(


@@ -329,7 +329,7 @@ func (r repository) NewBackupWithLookup(
 		return operations.BackupOperation{}, clues.Wrap(err, "connecting to m365")
 	}

-	ownerID, ownerName, err := ctrl.PopulateOwnerIDAndNamesFrom(ctx, sel.DiscreteOwner, ins)
+	ownerID, ownerName, err := ctrl.PopulateProtectedResourceIDAndName(ctx, sel.DiscreteOwner, ins)
 	if err != nil {
 		return operations.BackupOperation{}, clues.Wrap(err, "resolving resource owner details")
 	}


@@ -60,7 +60,7 @@ func (suite *RepositoryUnitSuite) TestInitialize() {
 			st, err := test.storage()
 			assert.NoError(t, err, clues.ToCore(err))

-			_, err = Initialize(ctx, test.account, st, control.Defaults())
+			_, err = Initialize(ctx, test.account, st, control.DefaultOptions())
 			test.errCheck(t, err, clues.ToCore(err))
 		})
 	}
@@ -94,7 +94,7 @@ func (suite *RepositoryUnitSuite) TestConnect() {
 			st, err := test.storage()
 			assert.NoError(t, err, clues.ToCore(err))

-			_, err = Connect(ctx, test.account, st, "not_found", control.Defaults())
+			_, err = Connect(ctx, test.account, st, "not_found", control.DefaultOptions())
 			test.errCheck(t, err, clues.ToCore(err))
 		})
 	}
@@ -137,7 +137,7 @@ func (suite *RepositoryIntegrationSuite) TestInitialize() {
 			defer flush()

 			st := test.storage(t)
-			r, err := Initialize(ctx, test.account, st, control.Defaults())
+			r, err := Initialize(ctx, test.account, st, control.DefaultOptions())
 			if err == nil {
 				defer func() {
 					err := r.Close(ctx)
@@ -186,11 +186,11 @@ func (suite *RepositoryIntegrationSuite) TestConnect() {
 	// need to initialize the repository before we can test connecting to it.
 	st := storeTD.NewPrefixedS3Storage(t)

-	repo, err := Initialize(ctx, account.Account{}, st, control.Defaults())
+	repo, err := Initialize(ctx, account.Account{}, st, control.DefaultOptions())
 	require.NoError(t, err, clues.ToCore(err))

 	// now re-connect
-	_, err = Connect(ctx, account.Account{}, st, repo.GetID(), control.Defaults())
+	_, err = Connect(ctx, account.Account{}, st, repo.GetID(), control.DefaultOptions())
 	assert.NoError(t, err, clues.ToCore(err))
 }
@@ -203,7 +203,7 @@ func (suite *RepositoryIntegrationSuite) TestConnect_sameID() {
 	// need to initialize the repository before we can test connecting to it.
 	st := storeTD.NewPrefixedS3Storage(t)

-	r, err := Initialize(ctx, account.Account{}, st, control.Defaults())
+	r, err := Initialize(ctx, account.Account{}, st, control.DefaultOptions())
 	require.NoError(t, err, clues.ToCore(err))

 	oldID := r.GetID()
@@ -212,7 +212,7 @@ func (suite *RepositoryIntegrationSuite) TestConnect_sameID() {
 	require.NoError(t, err, clues.ToCore(err))

 	// now re-connect
-	r, err = Connect(ctx, account.Account{}, st, oldID, control.Defaults())
+	r, err = Connect(ctx, account.Account{}, st, oldID, control.DefaultOptions())
 	require.NoError(t, err, clues.ToCore(err))
 	assert.Equal(t, oldID, r.GetID())
 }
@@ -228,7 +228,7 @@ func (suite *RepositoryIntegrationSuite) TestNewBackup() {
 	// need to initialize the repository before we can test connecting to it.
 	st := storeTD.NewPrefixedS3Storage(t)

-	r, err := Initialize(ctx, acct, st, control.Defaults())
+	r, err := Initialize(ctx, acct, st, control.DefaultOptions())
 	require.NoError(t, err, clues.ToCore(err))

 	userID := tconfig.M365UserID(t)
@@ -250,7 +250,7 @@ func (suite *RepositoryIntegrationSuite) TestNewRestore() {
 	// need to initialize the repository before we can test connecting to it.
 	st := storeTD.NewPrefixedS3Storage(t)

-	r, err := Initialize(ctx, acct, st, control.Defaults())
+	r, err := Initialize(ctx, acct, st, control.DefaultOptions())
 	require.NoError(t, err, clues.ToCore(err))

 	ro, err := r.NewRestore(ctx, "backup-id", selectors.Selector{DiscreteOwner: "test"}, restoreCfg)
@@ -269,7 +269,7 @@ func (suite *RepositoryIntegrationSuite) TestNewMaintenance() {
 	// need to initialize the repository before we can test connecting to it.
 	st := storeTD.NewPrefixedS3Storage(t)

-	r, err := Initialize(ctx, acct, st, control.Defaults())
+	r, err := Initialize(ctx, acct, st, control.DefaultOptions())
 	require.NoError(t, err, clues.ToCore(err))

 	mo, err := r.NewMaintenance(ctx, ctrlRepo.Maintenance{})
@@ -286,7 +286,7 @@ func (suite *RepositoryIntegrationSuite) TestConnect_DisableMetrics() {
 	// need to initialize the repository before we can test connecting to it.
 	st := storeTD.NewPrefixedS3Storage(t)

-	repo, err := Initialize(ctx, account.Account{}, st, control.Defaults())
+	repo, err := Initialize(ctx, account.Account{}, st, control.DefaultOptions())
 	require.NoError(t, err)

 	// now re-connect
@@ -308,14 +308,14 @@ func (suite *RepositoryIntegrationSuite) Test_Options() {
 		{
 			name: "default options",
 			opts: func() control.Options {
-				return control.Defaults()
+				return control.DefaultOptions()
 			},
 			expectedLen: 0,
 		},
 		{
 			name: "options with an extension factory",
 			opts: func() control.Options {
-				o := control.Defaults()
+				o := control.DefaultOptions()
 				o.ItemExtensionFactory = append(
 					o.ItemExtensionFactory,
 					&extensions.MockItemExtensionFactory{})
@@ -327,7 +327,7 @@ func (suite *RepositoryIntegrationSuite) Test_Options() {
 		{
 			name: "options with multiple extension factories",
 			opts: func() control.Options {
-				o := control.Defaults()
+				o := control.DefaultOptions()
 				f := []extensions.CreateItemExtensioner{
 					&extensions.MockItemExtensionFactory{},
 					&extensions.MockItemExtensionFactory{},


@@ -0,0 +1,64 @@
package api

import (
	"context"

	"github.com/alcionai/clues"
	"github.com/microsoftgraph/msgraph-sdk-go/models"

	"github.com/alcionai/corso/src/internal/common/ptr"
	"github.com/alcionai/corso/src/internal/m365/graph"
)

// ---------------------------------------------------------------------------
// controller
// ---------------------------------------------------------------------------

func (c Client) Lists() Lists {
	return Lists{c}
}

// Lists is an interface-compliant provider of the client.
type Lists struct {
	Client
}

// PostDrive creates a new list of type drive. Specifically used to create
// documentLibraries for SharePoint Sites.
func (c Lists) PostDrive(
	ctx context.Context,
	siteID, driveName string,
) (models.Driveable, error) {
	list := models.NewList()
	list.SetDisplayName(&driveName)
	list.SetDescription(ptr.To("corso auto-generated restore destination"))

	li := models.NewListInfo()
	li.SetTemplate(ptr.To("documentLibrary"))
	list.SetList(li)

	// creating a list of type documentLibrary will result in the creation
	// of a new drive owned by the given site.
	builder := c.Stable.
		Client().
		Sites().
		BySiteId(siteID).
		Lists()

	newList, err := builder.Post(ctx, list, nil)
	if graph.IsErrItemAlreadyExistsConflict(err) {
		return nil, clues.Stack(graph.ErrItemAlreadyExistsConflict, err).WithClues(ctx)
	}

	if err != nil {
		return nil, graph.Wrap(ctx, err, "creating documentLibrary list")
	}

	// drive information is not returned by the list creation.
	drive, err := builder.
		ByListId(ptr.Val(newList.GetId())).
		Drive().
		Get(ctx, nil)

	return drive, graph.Wrap(ctx, err, "fetching created documentLibrary").OrNil()
}


@@ -0,0 +1,57 @@
package api_test

import (
	"testing"

	"github.com/alcionai/clues"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	"github.com/stretchr/testify/suite"

	"github.com/alcionai/corso/src/internal/common/ptr"
	"github.com/alcionai/corso/src/internal/m365/graph"
	"github.com/alcionai/corso/src/internal/tester"
	"github.com/alcionai/corso/src/internal/tester/tconfig"
	"github.com/alcionai/corso/src/pkg/control/testdata"
)

type ListsAPIIntgSuite struct {
	tester.Suite
	its intgTesterSetup
}

func (suite *ListsAPIIntgSuite) SetupSuite() {
	suite.its = newIntegrationTesterSetup(suite.T())
}

func TestListsAPIIntgSuite(t *testing.T) {
	suite.Run(t, &ListsAPIIntgSuite{
		Suite: tester.NewIntegrationSuite(
			t,
			[][]string{tconfig.M365AcctCredEnvs}),
	})
}

func (suite *ListsAPIIntgSuite) TestLists_PostDrive() {
	t := suite.T()

	ctx, flush := tester.NewContext(t)
	defer flush()

	var (
		acl       = suite.its.ac.Lists()
		driveName = testdata.DefaultRestoreConfig("list_api_post_drive").Location
		siteID    = suite.its.siteID
	)

	// first post, should have no errors
	list, err := acl.PostDrive(ctx, siteID, driveName)
	require.NoError(t, err, clues.ToCore(err))

	// the list name cannot be set when posting, only its DisplayName.
	// so we double-check here that we're still getting the name we expect.
	assert.Equal(t, driveName, ptr.Val(list.GetName()))

	// second post, same name, should error on name conflict
	_, err = acl.PostDrive(ctx, siteID, driveName)
	require.ErrorIs(t, err, graph.ErrItemAlreadyExistsConflict, clues.ToCore(err))
}


@@ -108,3 +108,18 @@ the copy of `reports.txt` is named `reports 1.txt`.

Collisions will entirely replace the current version of the item with the backup
version. If multiple existing items collide with the backup item, only one of the
existing items is replaced.

## To resource

The `--to-resource` flag lets you select which resource will receive the restored data.
A resource can be a mailbox, user, or SharePoint site.

<CodeBlock language="bash">{
`corso restore onedrive --backup abcd --to-resource adelev@alcion.ai`
}</CodeBlock>
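
The flag works the same way for the other services. As an illustrative sketch (the backup ID and mailbox address below are placeholders, mirroring the OneDrive example above), an Exchange restore can target a different mailbox:

<CodeBlock language="bash">{
`corso restore exchange --backup abcd --to-resource adelev@alcion.ai`
}</CodeBlock>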

### Limitations

* The resource must exist. Corso will not create new mailboxes, users, or sites.
* The resource must have access to the service being restored. No restore will be
  performed for an unlicensed resource.


@@ -16,8 +16,6 @@ Below is a list of known Corso issues and limitations:
   from M365 while a backup creation is running.
   The next backup creation will correct any missing data.

-* SharePoint document library data can't be restored after the library has been deleted.
-
 * Sharing information of items in OneDrive/SharePoint using sharing links aren't backed up and restored.
 * Permissions/Access given to a site group can't be restored.