Merge branch 'main' into serialize-common
Commit: 4daa86fcc7

.github/ISSUE_TEMPLATE/BUG-REPORT.yaml (vendored, 2 changes)
@@ -35,6 +35,6 @@ body:
  id: logs
  attributes:
    label: Relevant log output
-    description: Please run Corso with `--log-level debug`.
+    description: Please run Corso with `--log-level debug` and attach the log file.
    placeholder: This will be automatically formatted, so no need for backticks.
    render: shell
.github/workflows/ci.yml (vendored, 30 changes)
@@ -419,11 +419,35 @@ jobs:
          RUDDERSTACK_CORSO_DATA_PLANE_URL: ${{ secrets.RUDDERSTACK_CORSO_DATA_PLANE_URL }}
          CORSO_VERSION: ${{ needs.SetEnv.outputs.version }}

-      - name: Upload assets
+      - name: Upload darwin arm64
        uses: actions/upload-artifact@v3
        with:
-          name: corso
-          path: src/dist/*
+          name: corso_Darwin_arm64
+          path: src/dist/corso_darwin_arm64/corso
+
+      - name: Upload linux arm64
+        uses: actions/upload-artifact@v3
+        with:
+          name: corso_Linux_arm64
+          path: src/dist/corso_linux_arm64/corso
+
+      - name: Upload darwin amd64
+        uses: actions/upload-artifact@v3
+        with:
+          name: corso_Darwin_amd64
+          path: src/dist/corso_darwin_amd64_v1/corso
+
+      - name: Upload linux amd64
+        uses: actions/upload-artifact@v3
+        with:
+          name: corso_Linux_amd64
+          path: src/dist/corso_linux_amd64_v1/corso
+
+      - name: Upload windows amd64
+        uses: actions/upload-artifact@v3
+        with:
+          name: corso_Windows_amd64
+          path: src/dist/corso_windows_amd64_v1/corso.exe

  Publish-Image:
    needs: [Test-Suite-Trusted, Linting, Website-Linting, SetEnv]
.github/workflows/ci_test_cleanup.yml (vendored, 5 changes)
@@ -16,8 +16,11 @@ jobs:

    steps:
      - uses: actions/checkout@v3
+      - uses: actions/setup-go@v3
+        with:
+          go-version: '1.19'

-      # sets the maximimum time to now-30m.
+      # sets the maximum time to now-30m.
      # CI test have a 10 minute timeout.
      # At 20 minutes ago, we should be safe from conflicts.
      # The additional 10 minutes is just to be good citizens.
CHANGELOG.md (14 changes)
@@ -7,20 +7,30 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased] (alpha)

### Fixed

- Check if the user specified for an exchange backup operation has a mailbox.

## [v0.1.0] (alpha) - 2023-01-13

### Added

- Folder entries in backup details now indicate whether an item in the hierarchy was updated
- Incremental backup support for exchange is now enabled by default.

### Changed

- The selectors Reduce() process will only include details that match the DiscreteOwner, if one is specified.
- New selector constructors will automatically set the DiscreteOwner if given a single-item slice.
- Write logs to disk by default ([#2082](https://github.com/alcionai/corso/pull/2082))

### Fixed

- Issue where repository connect progress bar was clobbering backup/restore operation output.
- Issue where a `backup create exchange` produced one backup record per data type.
- Specifying multiple users in a onedrive backup (ex: `--user a,b,c`) now properly delimits the input along the commas.
- Updated the list of M365 SKUs used to check if a user has a OneDrive license.

### Known Issues

@@ -30,7 +40,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Added

-- Folder entries in backup details now indicate whether an item in the hierarchy was updated
- Incremental backup support for Exchange ([#1777](https://github.com/alcionai/corso/issues/1777)). This is currently enabled by specifying the `--enable-incrementals`
with the `backup create` command. This functionality will be enabled by default in an upcoming release.
- Folder entries in backup details now include size and modified time for the hierarchy ([#1896](https://github.com/alcionai/corso/issues/1896))

@@ -114,7 +123,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Miscellaneous
  - Optional usage statistics reporting ([RM-35](https://github.com/alcionai/corso-roadmap/issues/35))

-[Unreleased]: https://github.com/alcionai/corso/compare/v0.0.4...HEAD
+[Unreleased]: https://github.com/alcionai/corso/compare/v0.1.0...HEAD
+[v0.1.0]: https://github.com/alcionai/corso/compare/v0.0.4...v0.1.0
[v0.0.4]: https://github.com/alcionai/corso/compare/v0.0.3...v0.0.4
[v0.0.3]: https://github.com/alcionai/corso/compare/v0.0.2...v0.0.3
[v0.0.2]: https://github.com/alcionai/corso/compare/v0.0.1...v0.0.2
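One of the fixes above delimits `--user a,b,c` along commas. The core of that behavior is a split-and-trim over the flag value; this is a stdlib-only sketch (`splitUsers` is a hypothetical name, not the actual corso code):

```go
package main

import (
	"fmt"
	"strings"
)

// splitUsers breaks a comma-delimited --user value into individual user ids,
// dropping surrounding whitespace and empty entries.
func splitUsers(flag string) []string {
	parts := strings.Split(flag, ",")
	users := make([]string, 0, len(parts))
	for _, p := range parts {
		if p = strings.TrimSpace(p); p != "" {
			users = append(users, p)
		}
	}
	return users
}

func main() {
	fmt.Println(splitUsers("a, b,c,")) // prints [a b c]
}
```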
design/cli.md (deleted, 214 lines)
@@ -1,214 +0,0 @@

# CLI Commands

## Status

Revision: v0.0.1

-----

This is a proposal for Corso CLI commands extrapolated from the Functional Requirements product documentation. Open questions are listed in the `Details & Discussion` section. The command set includes some p1/p2 actions for completeness. This proposal only intends to describe the available commands themselves and does not evaluate functionality or feature design beyond that goal.

# CLI Goals

- Ease (and enjoyment) of use, more than minimal functionality.
- Intended for use by humans, not computers.
- Outputs should be either interactive/progressive (for ongoing work) or easily greppable/parseable.

## Todo/Undefined

- Interactivity and sub-selection/helpful action completion within command operation.
- Quality-of-life niceties such as interactive output display, formatting and presentation, or maximum minimization of user effort to run Corso.

-----
## Commands

Standard format:
`corso {command} [{subcommand}] [{service|repository}] [{flag}...]`

| Cmd | | | Flags | Notes |
| --- | --- | --- | --- | --- |
| version | | | | Same as `corso --version` |
| | | | --version | Outputs Corso version details. |
| help | | | | Same as `corso --help` |
| * | * | help | | Same as `{command} --help` |
| * | * | | --help | Same as `{command} help` |

| Cmd | | | Flags | Notes |
| --- | --- | --- | --- | --- |
| repo | * | | | Same as `repo [*] --help`. |
| repo | init | {repository} | | Initialize a Corso repository. |
| repo | init | {repository} | --tenant {azure_tenant_id} | Provides the account's tenant ID. |
| repo | init | {repository} | --client {azure_client_id} | Provides the account's client ID. |
| repo | connect | {repository} | | Connects to the specified repo. |
| repo | configure | {repository} | | Sets mutable config properties to the provided values. |
| repo | * | * | --config {cfg_file_path} | Specify a repo configuration file. Values may also be provided via individual flags and env vars. |
| repo | * | * | --{config-prop} | Blanket commitment to support config via flags. |
| repo | * | * | --credentials {creds_file_path} | Specify a file containing credentials or secrets. Values may also be provided via env vars. |

| Cmd | | | Flags | Notes |
| --- | --- | --- | --- | --- |
| backup | * | | | Same as `backup [*] --help` |
| backup | list | {service} | | List all backups in the repository for the specified service. |
| backup | create | {service} | | Backup the specified service. |
| backup | * | {service} | --token {token} | Provides a security key for permission to perform backup. |
| backup | * | {service} | --{entity} {entity_id}... | Only involve the target entity(s). Entities are things like users, groups, sites, etc. Entity flag support is service-specific. |

| Cmd | | | Flags | Notes |
| --- | --- | --- | --- | --- |
| restore | | | | Same as `restore --help` |
| restore | {service} | | | Complete service restoration using the latest versioned backup. |
| restore | {service} | | --backup {backup_id} | Restore data from only the targeted backup(s). |
| restore | {service} | | --{entity} {entity_id}... | Only involve the target entity(s). Entities are things like users, groups, sites, etc. Entity flag support is service-specific. |

---
## Examples

### Basic Usage

**First Run**

```bash
$ export AZURE_CLIENT_SECRET=my_azure_secret
$ export AWS_SECRET_ACCESS_KEY=my_s3_secret
$ corso repo init s3 --bucket my_s3_bucket --access-key my_s3_key \
    --tenant my_azure_tenant_id --clientid my_azure_client_id
$ corso backup express
```

**Follow-up Actions**

```bash
$ corso repo connect s3 --bucket my_s3_bucket --access-key my_s3_key
$ corso backup express
$ corso backup list express
```

-----
# Details & Discussion

## UC0 - CLI User Interface

Base command: `corso`

Standard format: `corso {command} [{subcommand}] [{service}] [{flag}...]`

Examples:

- `corso help`
- `corso repo init --repository s3 --tenant t_1`
- `corso backup create teams`
- `corso restore teams --backup b_1`

## UC1 - Initialization and Connection

**Account Handling**

M365 accounts are paired with repo initialization, resulting in single-tenancy storage. Any `repo` action applies the same behavior to the account as well. That is, `init` will handle all initialization steps for both the repository and the account, and both must succeed for the command to complete successfully, including all necessary validation checks. Likewise, `connect` will validate and establish a connection (or, at least, the ability to communicate) with both the account and the repository.

**Init**

`corso repo init {repository} --config {cfg} --credentials {creds}`

Initializes a repository, bootstrapping resources as necessary and storing configuration details within Corso. Repository is the name of the repository provider, e.g. 's3'. Cfg and creds, in this example, point to JSON (or, alternatively, YAML?) files containing the details required to establish the connection. Configuration options, when known, will get support for flag-based declaration. Similarly, env vars will be supported as needed.

**Connection**

`corso repo connect {repository} --credentials {creds}`

[https://docs.flexera.com/flexera/EN/SaaSManager/M365CCIntegration.htm#integrations_3059193938_1840275](https://docs.flexera.com/flexera/EN/SaaSManager/M365CCIntegration.htm#integrations_3059193938_1840275)

Connects to an existing (i.e. initialized) repository.

Corso is expected to gracefully handle transient disconnections during backup/restore runtimes (and otherwise, as needed).

**Deletion**

`corso repo delete {repository}`

(Included here for discussion, but not being added to the CLI command set at this time.)

Removes a repository from Corso. More exploration is needed into the cascading effects (or lack thereof) of this command. At minimum, expect additional user involvement to confirm that the deletion is intended, and not erroneous.

## UC1.1 - Version

`corso --version` outputs the current version details, such as commit id and datetime, and possibly semver (complete release version details to be decided).
Further versioning controls are not currently covered in this proposal.
## UC2 - Configuration

`corso repo configure --repository {repo} --config {cfg}`

Updates the configuration details for an existing repository.

Configuration is divided between mutable and immutable properties. Generally, initialization-specific configurations (those that identify the storage repository, its connection, and its fundamental behavior), among other properties, are considered immutable and cannot be reconfigured. As a result, `repo configure` will not be able to rectify a misconfigured init; some other user flow will be needed to resolve that issue.

Configure allows mutation of config properties that can be safely and transiently applied, for example backup retention and expiration policies. A complete list of how each property is classified is forthcoming as we build that list of properties.

## UC3 - On-Demand Backup

`corso backup` is reserved as a non-actionable command, rather than having it kick off a backup action. This is to ensure users don't accidentally kick off a migration in the process of exploring the API. `corso backup` produces the same output as `corso backup --help`.

**Full Service Backup**

- `corso backup create {service}`

**Selective Backup**

- `corso backup create {service} --{entity} {entity_id}...`

Entities are service-applicable objects that match up to M365 objects: users, groups, sites, mailboxes, etc. Entity flags are available on a per-service basis. For example, --site is available for the sharepoint service, and --mailbox for express, but not the reverse. A full list of system-entity mappings is coming in the future.

**Examples**

- `corso backup` → displays the help output.
- `corso backup create teams` → generates a full backup of the teams service.
- `corso backup create express --group g_1` → backs up the g_1 group within express.

## UC3.2 - Security Token

(This section is incomplete: further design details are needed about security expression.) Some commands, such as backup/restore, require a security key declaration to verify that the caller has permission to perform the command.

`corso * * --token {token}`

## UC5 - Backup Ops

`corso backup list {service}`

Produces a list of the backups which currently exist in the repository.

`corso backup list {service} --{entity} {entity_id}...`

The list can be filtered to contain backups relevant to the specified entities. A possible user flow for restoration is for the user to use this to discover which backups match their needs, and then apply those backups in a restore operation.

**Expiration Control**

Will appear in a future revision.
## UC6 - Restore

Similar to backup, `corso restore` is reserved as a non-actionable command that serves up the same output as `corso restore --help`.

### UC6.1

**Full Service Restore**

- `corso restore {service} [--backup {backup_id}...]`

If no backups are specified, this defaults to the most recent backup of the specified service.

**Selective Restore**

- `corso restore {service} [--backup {backup_id}...] [--{entity} {entity_id}...]`

Entities are service-applicable objects that match up to M365 objects: users, groups, sites, mailboxes, etc. Entity flags are available on a per-service basis. For example, --site is available for the sharepoint service, and --mailbox for express, but not the reverse. A full list of system-entity mappings is coming in the future.

**Examples**

- `corso restore` → displays the help output.
- `corso restore teams` → restores all data in the teams service.
- `corso restore sharepoint --backup b_1` → restores the sharepoint data in the b_1 backup.
- `corso restore express --group g_1` → restores the g_1 group within express.

## UC6.2 - Disaster Recovery

Multi-service backup/restoration is still under review.
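The standard format above, `corso {command} [{subcommand}] [{service}] [{flag}...]`, can be illustrated with a small argument splitter. This is a stdlib-only sketch under stated assumptions: the real CLI is built on cobra, and `parseInvocation` is a hypothetical name used only for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// parseInvocation splits a corso-style argument list into its positional
// parts (command, subcommand, service) and the trailing flags. Everything
// from the first "--"-prefixed token onward is treated as flags.
func parseInvocation(args []string) (positionals, flags []string) {
	for i := 0; i < len(args); i++ {
		if strings.HasPrefix(args[i], "--") {
			flags = append(flags, args[i:]...)
			break
		}
		positionals = append(positionals, args[i])
	}
	return positionals, flags
}

func main() {
	pos, flags := parseInvocation([]string{"backup", "create", "exchange", "--user", "a,b,c"})
	fmt.Println(strings.Join(pos, "/"), len(flags)) // prints backup/create/exchange 2
}
```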
@@ -64,7 +64,7 @@ func BuildCommandTree(cmd *cobra.Command) {
	cmd.Flags().BoolP("version", "v", false, "current version info")
	cmd.PersistentPostRunE = config.InitFunc()
	config.AddConfigFlags(cmd)
-	logger.AddLogLevelFlag(cmd)
+	logger.AddLoggingFlags(cmd)
	observe.AddProgressBarFlags(cmd)
	print.AddOutputFlag(cmd)
	options.AddGlobalOperationFlags(cmd)
@@ -91,7 +91,9 @@ func Handle() {

	BuildCommandTree(corsoCmd)

-	ctx, log := logger.Seed(ctx, logger.PreloadLogLevel())
+	loglevel, logfile := logger.PreloadLoggingFlags()
+	ctx, log := logger.Seed(ctx, loglevel, logfile)

	defer func() {
		_ = log.Sync() // flush all logs in the buffer
	}()
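The change above splits log setup into a preload step (reading the level and file target before cobra runs) followed by a single seeding call. A minimal stdlib sketch of that shape, with hypothetical stand-ins for `logger.PreloadLoggingFlags` and `logger.Seed` (the real corso signatures differ):

```go
package main

import "fmt"

// preloadLoggingFlags scans raw args for logging flags ahead of normal flag
// parsing, so the logger can be seeded before any command runs. Defaults are
// assumed values for this sketch, not corso's actual defaults.
func preloadLoggingFlags(args []string) (level, file string) {
	level, file = "info", "stderr"
	for i, a := range args {
		if a == "--log-level" && i+1 < len(args) {
			level = args[i+1]
		}
		if a == "--log-file" && i+1 < len(args) {
			file = args[i+1]
		}
	}
	return level, file
}

// seed stands in for the logger constructor that receives both values at once.
func seed(level, file string) string {
	return fmt.Sprintf("logger(level=%s, file=%s)", level, file)
}

func main() {
	level, file := preloadLoggingFlags([]string{"--log-level", "debug", "--log-file", "corso.log"})
	fmt.Println(seed(level, file)) // prints logger(level=debug, file=corso.log)
}
```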
@@ -3,25 +3,12 @@ package main
import (
	"context"
	"os"
	"strings"
	"time"

	"github.com/google/uuid"
	"github.com/pkg/errors"
	"github.com/spf13/cobra"

	. "github.com/alcionai/corso/src/cli/print"
	"github.com/alcionai/corso/src/internal/common"
	"github.com/alcionai/corso/src/internal/connector"
	"github.com/alcionai/corso/src/internal/connector/mockconnector"
	"github.com/alcionai/corso/src/internal/data"
	"github.com/alcionai/corso/src/pkg/account"
	"github.com/alcionai/corso/src/pkg/backup/details"
	"github.com/alcionai/corso/src/pkg/control"
	"github.com/alcionai/corso/src/pkg/credentials"
	"github.com/alcionai/corso/src/cmd/factory/impl"
	"github.com/alcionai/corso/src/pkg/logger"
	"github.com/alcionai/corso/src/pkg/path"
	"github.com/alcionai/corso/src/pkg/selectors"
)

var factoryCmd = &cobra.Command{
@@ -42,17 +29,6 @@ var oneDriveCmd = &cobra.Command{
	RunE: handleOneDriveFactory,
}

-var (
-	count       int
-	destination string
-	tenant      string
-	user        string
-)
-
-// TODO: ErrGenerating = errors.New("not all items were successfully generated")
-
-var ErrNotYetImplemeted = errors.New("not yet implemented")
-
// ------------------------------------------------------------------------------------------
// CLI command handlers
// ------------------------------------------------------------------------------------------
@@ -65,18 +41,18 @@ func main() {

	// persistent flags that are common to all use cases
	fs := factoryCmd.PersistentFlags()
-	fs.StringVar(&tenant, "tenant", "", "m365 tenant containing the user")
-	fs.StringVar(&user, "user", "", "m365 user owning the new data")
+	fs.StringVar(&impl.Tenant, "tenant", "", "m365 tenant containing the user")
+	fs.StringVar(&impl.User, "user", "", "m365 user owning the new data")
	cobra.CheckErr(factoryCmd.MarkPersistentFlagRequired("user"))
-	fs.IntVar(&count, "count", 0, "count of items to produce")
+	fs.IntVar(&impl.Count, "count", 0, "count of items to produce")
	cobra.CheckErr(factoryCmd.MarkPersistentFlagRequired("count"))
-	fs.StringVar(&destination, "destination", "", "destination of the new data (will create as needed)")
+	fs.StringVar(&impl.Destination, "destination", "", "destination of the new data (will create as needed)")
	cobra.CheckErr(factoryCmd.MarkPersistentFlagRequired("destination"))

	factoryCmd.AddCommand(exchangeCmd)
-	addExchangeCommands(exchangeCmd)
+	impl.AddExchangeCommands(exchangeCmd)
	factoryCmd.AddCommand(oneDriveCmd)
-	addOneDriveCommands(oneDriveCmd)
+	impl.AddOneDriveCommands(oneDriveCmd)

	if err := factoryCmd.ExecuteContext(ctx); err != nil {
		logger.Flush(ctx)
@@ -85,180 +61,16 @@ func main() {
}

func handleFactoryRoot(cmd *cobra.Command, args []string) error {
-	Err(cmd.Context(), ErrNotYetImplemeted)
+	Err(cmd.Context(), impl.ErrNotYetImplemeted)
	return cmd.Help()
}

func handleExchangeFactory(cmd *cobra.Command, args []string) error {
-	Err(cmd.Context(), ErrNotYetImplemeted)
+	Err(cmd.Context(), impl.ErrNotYetImplemeted)
	return cmd.Help()
}

func handleOneDriveFactory(cmd *cobra.Command, args []string) error {
-	Err(cmd.Context(), ErrNotYetImplemeted)
+	Err(cmd.Context(), impl.ErrNotYetImplemeted)
	return cmd.Help()
}
-// ------------------------------------------------------------------------------------------
-// Restoration
-// ------------------------------------------------------------------------------------------
-
-type dataBuilderFunc func(id, now, subject, body string) []byte
-
-func generateAndRestoreItems(
-	ctx context.Context,
-	gc *connector.GraphConnector,
-	acct account.Account,
-	service path.ServiceType,
-	cat path.CategoryType,
-	sel selectors.Selector,
-	userID, destFldr string,
-	howMany int,
-	dbf dataBuilderFunc,
-) (*details.Details, error) {
-	items := make([]item, 0, howMany)
-
-	for i := 0; i < howMany; i++ {
-		var (
-			now       = common.Now()
-			nowLegacy = common.FormatLegacyTime(time.Now())
-			id        = uuid.NewString()
-			subject   = "automated " + now[:16] + " - " + id[:8]
-			body      = "automated " + cat.String() + " generation for " + userID + " at " + now + " - " + id
-		)
-
-		items = append(items, item{
-			name: id,
-			data: dbf(id, nowLegacy, subject, body),
-		})
-	}
-
-	collections := []collection{{
-		pathElements: []string{destFldr},
-		category:     cat,
-		items:        items,
-	}}
-
-	// TODO: fit the desination to the containers
-	dest := control.DefaultRestoreDestination(common.SimpleTimeTesting)
-	dest.ContainerName = destFldr
-
-	dataColls, err := buildCollections(
-		service,
-		acct.ID(), userID,
-		dest,
-		collections,
-	)
-	if err != nil {
-		return nil, err
-	}
-
-	Infof(ctx, "Generating %d %s items in %s\n", howMany, cat, destination)
-
-	return gc.RestoreDataCollections(ctx, acct, sel, dest, dataColls)
-}
-
-// ------------------------------------------------------------------------------------------
-// Common Helpers
-// ------------------------------------------------------------------------------------------
-
-func getGCAndVerifyUser(ctx context.Context, userID string) (*connector.GraphConnector, account.Account, error) {
-	tid := common.First(tenant, os.Getenv(account.AzureTenantID))
-
-	// get account info
-	m365Cfg := account.M365Config{
-		M365:          credentials.GetM365(),
-		AzureTenantID: tid,
-	}
-
-	acct, err := account.NewAccount(account.ProviderM365, m365Cfg)
-	if err != nil {
-		return nil, account.Account{}, errors.Wrap(err, "finding m365 account details")
-	}
-
-	// build a graph connector
-	gc, err := connector.NewGraphConnector(ctx, acct, connector.Users)
-	if err != nil {
-		return nil, account.Account{}, errors.Wrap(err, "connecting to graph api")
-	}
-
-	normUsers := map[string]struct{}{}
-
-	for k := range gc.Users {
-		normUsers[strings.ToLower(k)] = struct{}{}
-	}
-
-	if _, ok := normUsers[strings.ToLower(user)]; !ok {
-		return nil, account.Account{}, errors.New("user not found within tenant")
-	}
-
-	return gc, acct, nil
-}
-
-type item struct {
-	name string
-	data []byte
-}
-
-type collection struct {
-	// Elements (in order) for the path representing this collection. Should
-	// only contain elements after the prefix that corso uses for the path. For
-	// example, a collection for the Inbox folder in exchange mail would just be
-	// "Inbox".
-	pathElements []string
-	category     path.CategoryType
-	items        []item
-}
-
-func buildCollections(
-	service path.ServiceType,
-	tenant, user string,
-	dest control.RestoreDestination,
-	colls []collection,
-) ([]data.Collection, error) {
-	collections := make([]data.Collection, 0, len(colls))
-
-	for _, c := range colls {
-		pth, err := toDataLayerPath(
-			service,
-			tenant,
-			user,
-			c.category,
-			c.pathElements,
-			false,
-		)
-		if err != nil {
-			return nil, err
-		}
-
-		mc := mockconnector.NewMockExchangeCollection(pth, len(c.items))
-
-		for i := 0; i < len(c.items); i++ {
-			mc.Names[i] = c.items[i].name
-			mc.Data[i] = c.items[i].data
-		}
-
-		collections = append(collections, mc)
-	}
-
-	return collections, nil
-}
-
-func toDataLayerPath(
-	service path.ServiceType,
-	tenant, user string,
-	category path.CategoryType,
-	elements []string,
-	isItem bool,
-) (path.Path, error) {
-	pb := path.Builder{}.Append(elements...)
-
-	switch service {
-	case path.ExchangeService:
-		return pb.ToDataLayerExchangePathForCategory(tenant, user, category, isItem)
-	case path.OneDriveService:
-		return pb.ToDataLayerOneDrivePath(tenant, user, isItem)
-	}
-
-	return nil, errors.Errorf("unknown service %s", service.String())
-}
src/cmd/factory/impl/common.go (new file, 198 lines)

@@ -0,0 +1,198 @@
|
||||
package impl
|
||||
|
||||
import (
|
||||
"context"
|
||||
"os"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/google/uuid"
|
||||
"github.com/pkg/errors"
|
||||
|
||||
. "github.com/alcionai/corso/src/cli/print"
|
||||
"github.com/alcionai/corso/src/internal/common"
|
||||
"github.com/alcionai/corso/src/internal/connector"
|
||||
"github.com/alcionai/corso/src/internal/connector/mockconnector"
|
||||
"github.com/alcionai/corso/src/internal/data"
|
||||
"github.com/alcionai/corso/src/pkg/account"
|
||||
"github.com/alcionai/corso/src/pkg/backup/details"
|
||||
"github.com/alcionai/corso/src/pkg/control"
|
||||
"github.com/alcionai/corso/src/pkg/credentials"
|
||||
"github.com/alcionai/corso/src/pkg/path"
|
||||
"github.com/alcionai/corso/src/pkg/selectors"
|
||||
)
|
||||
|
||||
var (
|
||||
Count int
|
||||
Destination string
|
||||
Tenant string
|
||||
User string
|
||||
)
|
||||
|
||||
// TODO: ErrGenerating = errors.New("not all items were successfully generated")
|
||||
|
||||
var ErrNotYetImplemeted = errors.New("not yet implemented")
|
||||
|
||||
// ------------------------------------------------------------------------------------------
|
||||
// Restoration
|
||||
// ------------------------------------------------------------------------------------------
|
||||
|
||||
type dataBuilderFunc func(id, now, subject, body string) []byte
|
||||
|
||||
func generateAndRestoreItems(
|
||||
ctx context.Context,
|
||||
gc *connector.GraphConnector,
|
||||
acct account.Account,
|
||||
service path.ServiceType,
|
||||
cat path.CategoryType,
|
||||
sel selectors.Selector,
|
||||
tenantID, userID, destFldr string,
|
||||
howMany int,
|
||||
dbf dataBuilderFunc,
|
||||
) (*details.Details, error) {
|
||||
items := make([]item, 0, howMany)
|
||||
|
||||
for i := 0; i < howMany; i++ {
|
||||
var (
|
||||
now = common.Now()
|
||||
nowLegacy = common.FormatLegacyTime(time.Now())
|
||||
id = uuid.NewString()
|
||||
subject = "automated " + now[:16] + " - " + id[:8]
|
||||
body = "automated " + cat.String() + " generation for " + userID + " at " + now + " - " + id
|
||||
)
|
||||
|
||||
items = append(items, item{
|
||||
name: id,
|
||||
data: dbf(id, nowLegacy, subject, body),
|
||||
})
|
||||
}
|
||||
|
||||
collections := []collection{{
|
||||
pathElements: []string{destFldr},
|
||||
category: cat,
|
||||
items: items,
|
||||
}}
|
||||
|
||||
// TODO: fit the desination to the containers
|
||||
dest := control.DefaultRestoreDestination(common.SimpleTimeTesting)
|
||||
dest.ContainerName = destFldr
|
||||
|
||||
dataColls, err := buildCollections(
|
||||
service,
|
||||
tenantID, userID,
|
||||
dest,
|
||||
collections,
|
||||
)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
Infof(ctx, "Generating %d %s items in %s\n", howMany, cat, Destination)
|
||||
|
||||
return gc.RestoreDataCollections(ctx, acct, sel, dest, dataColls)
|
||||
}
|
||||
|
||||
// ------------------------------------------------------------------------------------------
|
||||
// Common Helpers
|
||||
// ------------------------------------------------------------------------------------------
|
||||
|
||||
func getGCAndVerifyUser(ctx context.Context, userID string) (*connector.GraphConnector, account.Account, error) {
|
||||
tid := common.First(Tenant, os.Getenv(account.AzureTenantID))
|
||||
|
||||
// get account info
|
||||
m365Cfg := account.M365Config{
|
||||
M365: credentials.GetM365(),
|
||||
AzureTenantID: tid,
|
||||
}
|
||||
|
||||
acct, err := account.NewAccount(account.ProviderM365, m365Cfg)
|
||||
if err != nil {
|
||||
return nil, account.Account{}, errors.Wrap(err, "finding m365 account details")
|
||||
}
|
||||
|
||||
// build a graph connector
|
||||
gc, err := connector.NewGraphConnector(ctx, acct, connector.Users)
|
||||
	if err != nil {
		return nil, account.Account{}, errors.Wrap(err, "connecting to graph api")
	}

	normUsers := map[string]struct{}{}

	for k := range gc.Users {
		normUsers[strings.ToLower(k)] = struct{}{}
	}

	if _, ok := normUsers[strings.ToLower(User)]; !ok {
		return nil, account.Account{}, errors.New("user not found within tenant")
	}

	return gc, acct, nil
}

type item struct {
	name string
	data []byte
}

type collection struct {
	// Elements (in order) for the path representing this collection. Should
	// only contain elements after the prefix that corso uses for the path. For
	// example, a collection for the Inbox folder in exchange mail would just be
	// "Inbox".
	pathElements []string
	category     path.CategoryType
	items        []item
}

func buildCollections(
	service path.ServiceType,
	tenant, user string,
	dest control.RestoreDestination,
	colls []collection,
) ([]data.Collection, error) {
	collections := make([]data.Collection, 0, len(colls))

	for _, c := range colls {
		pth, err := toDataLayerPath(
			service,
			tenant,
			user,
			c.category,
			c.pathElements,
			false,
		)
		if err != nil {
			return nil, err
		}

		mc := mockconnector.NewMockExchangeCollection(pth, len(c.items))

		for i := 0; i < len(c.items); i++ {
			mc.Names[i] = c.items[i].name
			mc.Data[i] = c.items[i].data
		}

		collections = append(collections, mc)
	}

	return collections, nil
}

func toDataLayerPath(
	service path.ServiceType,
	tenant, user string,
	category path.CategoryType,
	elements []string,
	isItem bool,
) (path.Path, error) {
	pb := path.Builder{}.Append(elements...)

	switch service {
	case path.ExchangeService:
		return pb.ToDataLayerExchangePathForCategory(tenant, user, category, isItem)
	case path.OneDriveService:
		return pb.ToDataLayerOneDrivePath(tenant, user, isItem)
	}

	return nil, errors.Errorf("unknown service %s", service.String())
}
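The toDataLayerPath helper above is a plain switch that routes one generic (tenant, user, elements) tuple to a per-service path builder. A minimal self-contained sketch of that dispatch pattern; the serviceType enum and the path prefixes here are illustrative stand-ins, not corso's real path.ServiceType or path schema:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// serviceType stands in (as an assumption) for path.ServiceType: a small
// enum selecting which service-specific path layout to use.
type serviceType int

const (
	exchangeService serviceType = iota
	oneDriveService
)

// toDataLayerPath sketches the dispatch: one switch routing a generic
// (tenant, user, elements) tuple to a per-service layout. The "exchange"
// and "onedrive" prefixes are illustrative only.
func toDataLayerPath(svc serviceType, tenant, user string, elements []string) (string, error) {
	suffix := strings.Join(elements, "/")

	switch svc {
	case exchangeService:
		return path.Join(tenant, "exchange", user, suffix), nil
	case oneDriveService:
		return path.Join(tenant, "onedrive", user, suffix), nil
	}

	return "", fmt.Errorf("unknown service %d", svc)
}

func main() {
	p, err := toDataLayerPath(exchangeService, "tenant1", "user1", []string{"Inbox"})
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // tenant1/exchange/user1/Inbox
}
```

The default return after the switch mirrors the code above: unknown services fail loudly instead of silently producing a malformed path.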
@ -1,4 +1,4 @@
package main
package impl

import (
	"github.com/spf13/cobra"
@ -30,7 +30,7 @@ var (
	}
)

func addExchangeCommands(cmd *cobra.Command) {
func AddExchangeCommands(cmd *cobra.Command) {
	cmd.AddCommand(emailsCmd)
	cmd.AddCommand(eventsCmd)
	cmd.AddCommand(contactsCmd)
@ -47,7 +47,7 @@ func handleExchangeEmailFactory(cmd *cobra.Command, args []string) error {
		return nil
	}

	gc, acct, err := getGCAndVerifyUser(ctx, user)
	gc, acct, err := getGCAndVerifyUser(ctx, User)
	if err != nil {
		return Only(ctx, err)
	}
@ -58,12 +58,12 @@ func handleExchangeEmailFactory(cmd *cobra.Command, args []string) error {
		acct,
		service,
		category,
		selectors.NewExchangeRestore([]string{user}).Selector,
		user, destination,
		count,
		selectors.NewExchangeRestore([]string{User}).Selector,
		Tenant, User, Destination,
		Count,
		func(id, now, subject, body string) []byte {
			return mockconnector.GetMockMessageWith(
				user, user, user,
				User, User, User,
				subject, body, body,
				now, now, now, now)
		},
@ -88,7 +88,7 @@ func handleExchangeCalendarEventFactory(cmd *cobra.Command, args []string) error
		return nil
	}

	gc, acct, err := getGCAndVerifyUser(ctx, user)
	gc, acct, err := getGCAndVerifyUser(ctx, User)
	if err != nil {
		return Only(ctx, err)
	}
@ -99,12 +99,12 @@ func handleExchangeCalendarEventFactory(cmd *cobra.Command, args []string) error
		acct,
		service,
		category,
		selectors.NewExchangeRestore([]string{user}).Selector,
		user, destination,
		count,
		selectors.NewExchangeRestore([]string{User}).Selector,
		Tenant, User, Destination,
		Count,
		func(id, now, subject, body string) []byte {
			return mockconnector.GetMockEventWith(
				user, subject, body, body,
				User, subject, body, body,
				now, now, false)
		},
	)
@ -128,7 +128,7 @@ func handleExchangeContactFactory(cmd *cobra.Command, args []string) error {
		return nil
	}

	gc, acct, err := getGCAndVerifyUser(ctx, user)
	gc, acct, err := getGCAndVerifyUser(ctx, User)
	if err != nil {
		return Only(ctx, err)
	}
@ -139,9 +139,9 @@ func handleExchangeContactFactory(cmd *cobra.Command, args []string) error {
		acct,
		service,
		category,
		selectors.NewExchangeRestore([]string{user}).Selector,
		user, destination,
		count,
		selectors.NewExchangeRestore([]string{User}).Selector,
		Tenant, User, Destination,
		Count,
		func(id, now, subject, body string) []byte {
			given, mid, sur := id[:8], id[9:13], id[len(id)-12:]

@ -1,4 +1,4 @@
package main
package impl

import (
	"github.com/spf13/cobra"
@ -13,7 +13,7 @@ var filesCmd = &cobra.Command{
	RunE: handleOneDriveFileFactory,
}

func addOneDriveCommands(cmd *cobra.Command) {
func AddOneDriveCommands(cmd *cobra.Command) {
	cmd.AddCommand(filesCmd)
}

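The renames above (addExchangeCommands to AddExchangeCommands, addOneDriveCommands to AddOneDriveCommands) go with the move from package main to package impl: once the registration functions live outside the main package, they must be exported to remain callable. A self-contained sketch of that registration pattern; the command struct here is a hypothetical stand-in for *cobra.Command, which is not reproduced:

```go
package main

import "fmt"

// command is a minimal stand-in for *cobra.Command, just enough to show
// subcommand registration. The real code uses github.com/spf13/cobra.
type command struct {
	name string
	subs []*command
}

func (c *command) AddCommand(sub *command) {
	c.subs = append(c.subs, sub)
}

// AddExchangeCommands is exported (capitalized) because, after the move out
// of package main, callers in other packages must be able to reach it.
func AddExchangeCommands(cmd *command) {
	cmd.AddCommand(&command{name: "emails"})
	cmd.AddCommand(&command{name: "events"})
	cmd.AddCommand(&command{name: "contacts"})
}

func main() {
	root := &command{name: "factory"}
	AddExchangeCommands(root)

	for _, s := range root.subs {
		fmt.Println(s.name)
	}
}
```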
@ -5,11 +5,11 @@
package main

import (
	"bytes"
	"context"
	"fmt"
	"os"

	"github.com/microsoft/kiota-abstractions-go/serialization"
	kw "github.com/microsoft/kiota-serialization-json-go"
	"github.com/pkg/errors"
	"github.com/spf13/cobra"
@ -18,12 +18,11 @@ import (
	"github.com/alcionai/corso/src/cli/utils"
	"github.com/alcionai/corso/src/internal/common"
	"github.com/alcionai/corso/src/internal/connector"
	"github.com/alcionai/corso/src/internal/connector/exchange"
	"github.com/alcionai/corso/src/internal/connector/exchange/api"
	"github.com/alcionai/corso/src/internal/connector/graph"
	"github.com/alcionai/corso/src/internal/connector/support"
	"github.com/alcionai/corso/src/internal/data"
	"github.com/alcionai/corso/src/pkg/account"
	"github.com/alcionai/corso/src/pkg/backup/details"
	"github.com/alcionai/corso/src/pkg/credentials"
	"github.com/alcionai/corso/src/pkg/logger"
	"github.com/alcionai/corso/src/pkg/path"
@ -77,12 +76,12 @@ func handleGetCommand(cmd *cobra.Command, args []string) error {
		return nil
	}

	gc, creds, err := getGC(ctx)
	_, creds, err := getGC(ctx)
	if err != nil {
		return err
	}

	err = runDisplayM365JSON(ctx, gc.Service, creds)
	err = runDisplayM365JSON(ctx, creds, user, m365ID)
	if err != nil {
		return Only(ctx, errors.Wrapf(err, "unable to create mock from M365: %s", m365ID))
	}
@ -92,13 +91,14 @@ func handleGetCommand(cmd *cobra.Command, args []string) error {

func runDisplayM365JSON(
	ctx context.Context,
	gs graph.Servicer,
	creds account.M365Config,
	user, itemID string,
) error {
	var (
		get api.GraphRetrievalFunc
		serializeFunc exchange.GraphSerializeFunc
		bs  []byte
		err error
		cat = graph.StringToPathCategory(category)
		sw  = kw.NewJsonSerializationWriter()
	)

	ac, err := api.NewClient(creds)
@ -107,58 +107,60 @@ func runDisplayM365JSON(
	}

	switch cat {
	case path.EmailCategory, path.EventsCategory, path.ContactsCategory:
		get, serializeFunc = exchange.GetQueryAndSerializeFunc(ac, cat)
	case path.EmailCategory:
		bs, err = getItem(ctx, ac.Mail(), user, itemID)
	case path.EventsCategory:
		bs, err = getItem(ctx, ac.Events(), user, itemID)
	case path.ContactsCategory:
		bs, err = getItem(ctx, ac.Contacts(), user, itemID)
	default:
		return fmt.Errorf("unable to process category: %s", cat)
	}

	channel := make(chan data.Stream, 1)

	response, err := get(ctx, user, m365ID)
	if err != nil {
		return errors.Wrap(err, support.ConnectorStackErrorTrace(err))
	}

	// First return is the number of bytes that were serialized. Ignored
	_, err = serializeFunc(ctx, gs, channel, response, user)
	close(channel)
	str := string(bs)

	err = sw.WriteStringValue("", &str)
	if err != nil {
		return err
	}

	sw := kw.NewJsonSerializationWriter()

	for item := range channel {
		buf := &bytes.Buffer{}

		_, err := buf.ReadFrom(item.ToReader())
		if err != nil {
			return errors.Wrapf(err, "unable to parse given data: %s", m365ID)
		}

		byteArray := buf.Bytes()
		newValue := string(byteArray)

		err = sw.WriteStringValue("", &newValue)
		if err != nil {
			return errors.Wrapf(err, "unable to %s to string value", m365ID)
			return errors.Wrapf(err, "unable to %s to string value", itemID)
		}

	array, err := sw.GetSerializedContent()
	if err != nil {
		return errors.Wrapf(err, "unable to serialize new value from M365:%s", m365ID)
		return errors.Wrapf(err, "unable to serialize new value from M365:%s", itemID)
	}

	fmt.Println(string(array))

	//lint:ignore SA4004 only expecting one item
	return nil
}

type itemer interface {
	GetItem(
		ctx context.Context,
		user, itemID string,
	) (serialization.Parsable, *details.ExchangeInfo, error)
	Serialize(
		ctx context.Context,
		item serialization.Parsable,
		user, itemID string,
	) ([]byte, error)
}

func getItem(
	ctx context.Context,
	itm itemer,
	user, itemID string,
) ([]byte, error) {
	sp, _, err := itm.GetItem(ctx, user, itemID)
	if err != nil {
		return nil, errors.Wrap(err, "getting item")
	}

	// This should never happen
	return errors.New("m365 object not serialized")
	return itm.Serialize(ctx, sp, user, itemID)
}

//-------------------------------------------------------------------------------

27
src/go.mod
@ -2,13 +2,15 @@ module github.com/alcionai/corso/src

go 1.19

replace github.com/kopia/kopia => github.com/alcionai/kopia v0.10.8-0.20230112200734-ac706ef83a1c

require (
	github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.2.0
	github.com/aws/aws-sdk-go v1.44.176
	github.com/aws/aws-sdk-go v1.44.181
	github.com/aws/aws-xray-sdk-go v1.8.0
	github.com/google/uuid v1.3.0
	github.com/hashicorp/go-multierror v1.1.1
	github.com/kopia/kopia v0.12.0
	github.com/kopia/kopia v0.12.2-0.20221229232524-ba938cf58cc8
	github.com/microsoft/kiota-abstractions-go v0.15.2
	github.com/microsoft/kiota-authentication-azure-go v0.5.0
	github.com/microsoft/kiota-http-go v0.11.0
@ -17,6 +19,7 @@ require (
	github.com/microsoftgraph/msgraph-sdk-go-core v0.31.1
	github.com/pkg/errors v0.9.1
	github.com/rudderlabs/analytics-go v3.3.3+incompatible
	github.com/spatialcurrent/go-lazy v0.0.0-20211115014721-47315cc003d1
	github.com/spf13/cobra v1.6.1
	github.com/spf13/pflag v1.0.5
	github.com/spf13/viper v1.14.0
@ -34,6 +37,7 @@ require (
	github.com/VividCortex/ewma v1.2.0 // indirect
	github.com/acarl005/stripansi v0.0.0-20180116102854-5a71ef0e047d // indirect
	github.com/andybalholm/brotli v1.0.4 // indirect
	github.com/dnaeon/go-vcr v1.2.0 // indirect
	github.com/fsnotify/fsnotify v1.6.0 // indirect
	github.com/hashicorp/hcl v1.0.0 // indirect
	github.com/magiconair/properties v1.8.6 // indirect
@ -47,6 +51,7 @@ require (
	github.com/subosito/gotenv v1.4.1 // indirect
	github.com/valyala/bytebufferpool v1.0.0 // indirect
	github.com/valyala/fasthttp v1.34.0 // indirect
	gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect
	gopkg.in/yaml.v2 v2.4.0 // indirect
)

@ -60,7 +65,7 @@ require (
	github.com/chmduquesne/rollinghash v4.0.0+incompatible // indirect
	github.com/cjlapao/common-go v0.0.37 // indirect
	github.com/davecgh/go-spew v1.1.1 // indirect
	github.com/dustin/go-humanize v1.0.0
	github.com/dustin/go-humanize v1.0.1
	github.com/edsrzf/mmap-go v1.1.0 // indirect
	github.com/go-logr/logr v1.2.3 // indirect
	github.com/go-logr/stdr v1.2.2 // indirect
@ -72,10 +77,10 @@ require (
	github.com/inhies/go-bytesize v0.0.0-20220417184213-4913239db9cf
	github.com/jmespath/go-jmespath v0.4.0 // indirect
	github.com/json-iterator/go v1.1.12 // indirect
	github.com/klauspost/compress v1.15.11 // indirect
	github.com/klauspost/compress v1.15.12 // indirect
	github.com/klauspost/cpuid/v2 v2.1.1 // indirect
	github.com/klauspost/pgzip v1.2.5 // indirect
	github.com/klauspost/reedsolomon v1.11.0 // indirect
	github.com/klauspost/reedsolomon v1.11.3 // indirect
	github.com/kylelemons/godebug v1.1.0 // indirect
	github.com/mattn/go-colorable v0.1.13 // indirect
	github.com/mattn/go-isatty v0.0.16 // indirect
@ -84,7 +89,7 @@ require (
	github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d // indirect
	github.com/microsoft/kiota-serialization-text-go v0.6.0 // indirect
	github.com/minio/md5-simd v1.1.2 // indirect
	github.com/minio/minio-go/v7 v7.0.39 // indirect
	github.com/minio/minio-go/v7 v7.0.45 // indirect
	github.com/minio/sha256-simd v1.0.0 // indirect
	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
	github.com/modern-go/reflect2 v1.0.2 // indirect
@ -92,8 +97,8 @@ require (
	github.com/pierrec/lz4 v2.6.1+incompatible // indirect
	github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 // indirect
	github.com/pmezard/go-difflib v1.0.0 // indirect
	github.com/prometheus/client_golang v1.13.0 // indirect
	github.com/prometheus/client_model v0.2.0 // indirect
	github.com/prometheus/client_golang v1.14.0 // indirect
	github.com/prometheus/client_model v0.3.0 // indirect
	github.com/prometheus/common v0.37.0 // indirect
	github.com/prometheus/procfs v0.8.0 // indirect
	github.com/rivo/uniseg v0.2.0 // indirect
@ -109,14 +114,14 @@ require (
	go.opentelemetry.io/otel/trace v1.11.2 // indirect
	go.uber.org/atomic v1.10.0 // indirect
	go.uber.org/multierr v1.8.0 // indirect
	golang.org/x/crypto v0.1.0 // indirect
	golang.org/x/crypto v0.3.0 // indirect
	golang.org/x/mod v0.7.0 // indirect
	golang.org/x/net v0.5.0 // indirect
	golang.org/x/sync v0.1.0 // indirect
	golang.org/x/sys v0.4.0 // indirect
	golang.org/x/text v0.6.0 // indirect
	google.golang.org/genproto v0.0.0-20221024183307-1bc688fe9f3e // indirect
	google.golang.org/grpc v1.50.1 // indirect
	google.golang.org/genproto v0.0.0-20221206210731-b1a01be3a5f6 // indirect
	google.golang.org/grpc v1.51.0 // indirect
	google.golang.org/protobuf v1.28.1 // indirect
	gopkg.in/ini.v1 v1.67.0 // indirect
	gopkg.in/yaml.v3 v3.0.1 // indirect

63
src/go.sum
@ -47,19 +47,23 @@ github.com/AzureAD/microsoft-authentication-library-for-go v0.7.0/go.mod h1:BDJ5
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/DATA-DOG/go-sqlmock v1.4.1 h1:ThlnYciV1iM/V0OSF/dtkqWb6xo5qITT1TJBG1MRDJM=
github.com/GehirnInc/crypt v0.0.0-20200316065508-bb7000b8a962 h1:KeNholpO2xKjgaaSyd+DyQRrsQjhbSeS7qe4nEw8aQw=
github.com/VividCortex/ewma v1.2.0 h1:f58SaIzcDXrSy3kWaHNvuJgJ3Nmz59Zji6XoJR/q1ow=
github.com/VividCortex/ewma v1.2.0/go.mod h1:nz4BbCtbLyFDeC9SUHbtcT5644juEuWfUAUnGx7j5l4=
github.com/acarl005/stripansi v0.0.0-20180116102854-5a71ef0e047d h1:licZJFw2RwpHMqeKTCYkitsPqHNxTmd4SNR5r94FGM8=
github.com/acarl005/stripansi v0.0.0-20180116102854-5a71ef0e047d/go.mod h1:asat636LX7Bqt5lYEZ27JNDcqxfjdBQuJ/MM4CN/Lzo=
github.com/alcionai/kopia v0.10.8-0.20230112200734-ac706ef83a1c h1:uUcdEZ4sz7kRYVWB3K49MBHdICRyXCVAzd4ZiY3lvo0=
github.com/alcionai/kopia v0.10.8-0.20230112200734-ac706ef83a1c/go.mod h1:yzJV11S6N6XMboXt7oCO6Jy2jJHPeSMtA+KOJ9Y1548=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/alessio/shellescape v1.4.1 h1:V7yhSDDn8LP4lc4jS8pFkt0zCnzVJlG5JXy9BVKJUX0=
github.com/andybalholm/brotli v1.0.4 h1:V7DdXeJtZscaqfNuAdSRuRFzuiKlHSC/Zh3zl9qY3JY=
github.com/andybalholm/brotli v1.0.4/go.mod h1:fO7iG3H7G2nSZ7m0zPUDn85XEX2GTukHGRSepvi9Eig=
github.com/aws/aws-sdk-go v1.44.176 h1:mxcfI3IWHemX+5fEKt5uqIS/hdbaR7qzGfJYo5UyjJE=
github.com/aws/aws-sdk-go v1.44.176/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
github.com/aws/aws-sdk-go v1.44.181 h1:w4OzE8bwIVo62gUTAp/uEFO2HSsUtf1pjXpSs36cluY=
github.com/aws/aws-sdk-go v1.44.181/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
github.com/aws/aws-xray-sdk-go v1.8.0 h1:0xncHZ588wB/geLjbM/esoW3FOEThWy2TJyb4VXfLFY=
github.com/aws/aws-xray-sdk-go v1.8.0/go.mod h1:7LKe47H+j3evfvS1+q0wzpoaGXGrF3mUsfM+thqVO+A=
github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
@ -85,12 +89,14 @@ github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGX
github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/danieljoos/wincred v1.1.2 h1:QLdCxFs1/Yl4zduvBdcHB8goaYk9RARS2SgLLRuAyr0=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dnaeon/go-vcr v1.1.0 h1:ReYa/UBrRyQdant9B4fNHGoCNKw6qh6P0fsdGmZpR7c=
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dnaeon/go-vcr v1.2.0 h1:zHCHvJYTMh1N7xnV7zf1m1GPBF9Ad0Jk/whtQ1663qI=
github.com/dnaeon/go-vcr v1.2.0/go.mod h1:R4UdLID7HZT3taECzJs4YgbbH6PIGXB6W/sc5OLb6RQ=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/edsrzf/mmap-go v1.1.0 h1:6EUwBLQ/Mcr1EYLE4Tn1VdW1A4ckqCQWZBw8Hr0kjpQ=
github.com/edsrzf/mmap-go v1.1.0/go.mod h1:19H/e8pUPLicwkyNgOykDXkJ9F0MHE+Z52B8EIth78Q=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
@ -119,6 +125,8 @@ github.com/go-logr/logr v1.2.3/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbV
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/godbus/dbus/v5 v5.0.6 h1:mkgN1ofwASrYnJ5W6U/BxG15eXXXjirgZc7CLqkcaro=
github.com/gofrs/flock v0.8.1 h1:+gYjHKf32LDeiEEFhQaotPbLuUXjY5ZqxKgXy7n59aw=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/golang-jwt/jwt/v4 v4.4.2 h1:rcc4lwaZgFMCZ5jxF9ABolDcIHdBytAFgqFPbSJQAYs=
github.com/golang-jwt/jwt/v4 v4.4.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
@ -184,7 +192,9 @@ github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/google-cloud-go-testing v0.0.0-20200911160855-bcd43fbb19e8/go.mod h1:dvDLG8qkwmyD9a/MJJN3XJcT3xFxOKAvTZGvuZmac9g=
github.com/gorilla/mux v1.8.0 h1:i40aqfkR1h2SlN9hojwV5ZA91wcXFOvkdNIeFDP5koI=
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0 h1:+9834+KizmvFV7pXQGSXQTsaWhq2GjuNUt0aUU0YBYw=
github.com/hanwen/go-fuse/v2 v2.1.1-0.20220112183258-f57e95bda82d h1:ibbzF2InxMOS+lLCphY9PHNKPURDUBNKaG6ErSq8gJQ=
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
@ -217,8 +227,8 @@ github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7V
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.15.0/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/compress v1.15.11 h1:Lcadnb3RKGin4FYM/orgq0qde+nc15E5Cbqg4B9Sx9c=
github.com/klauspost/compress v1.15.11/go.mod h1:QPwzmACJjUTFsnSHH934V6woptycfrDDJnH7hvFVbGM=
github.com/klauspost/compress v1.15.12 h1:YClS/PImqYbn+UILDnqxQCZ3RehC9N318SU3kElDUEM=
github.com/klauspost/compress v1.15.12/go.mod h1:QPwzmACJjUTFsnSHH934V6woptycfrDDJnH7hvFVbGM=
github.com/klauspost/cpuid/v2 v2.0.1/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.0.4/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.0.12/go.mod h1:g2LTdtYhdyuGPqyWyv7qRAmj1WBqxuObKfj5c0PQa7c=
@ -226,15 +236,15 @@ github.com/klauspost/cpuid/v2 v2.1.1 h1:t0wUqjowdm8ezddV5k0tLWVklVuvLJpoHeb4WBdy
github.com/klauspost/cpuid/v2 v2.1.1/go.mod h1:RVVoqg1df56z8g3pUjL/3lE5UfnlrJX8tyFgg4nqhuY=
github.com/klauspost/pgzip v1.2.5 h1:qnWYvvKqedOF2ulHpMG72XQol4ILEJ8k2wwRl/Km8oE=
github.com/klauspost/pgzip v1.2.5/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
github.com/klauspost/reedsolomon v1.11.0 h1:fc24kMFf4I6dXJwSkVAsw8Za/dMcJrV5ImeDjG3ss1M=
github.com/klauspost/reedsolomon v1.11.0/go.mod h1:FXLZzlJIdfqEnQLdUKWNRuMZg747hZ4oYp2Ml60Lb/k=
github.com/klauspost/reedsolomon v1.11.3 h1:rX9UNNvDhJ0Bq45y6uBy/eYehcjyz5faokTuZmu1Q9U=
github.com/klauspost/reedsolomon v1.11.3/go.mod h1:FXLZzlJIdfqEnQLdUKWNRuMZg747hZ4oYp2Ml60Lb/k=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kopia/kopia v0.12.0 h1:8Pj7Q7Pn1hoDdzmHX6rryfO0f/3AAEy/f5xW2itVHIo=
github.com/kopia/kopia v0.12.0/go.mod h1:pkf8YKBD69IEb/2X/D8jddYaJSb1eXQCtK4kiMa+BIc=
github.com/kopia/htmluibuild v0.0.0-20220928042710-9fdd02afb1e7 h1:WP5VfIQL7AaYkO4zTNSCsVOawTzudbc4tvLojvg0RKc=
github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.0 h1:WgNl7dwNpEZ6jJ9k1snq4pZsg7DOEN8hP9Xw0Tsjwk0=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
@ -275,8 +285,8 @@ github.com/microsoftgraph/msgraph-sdk-go-core v0.31.1 h1:aVvnO5l8qLCEcvELc5n9grt
github.com/microsoftgraph/msgraph-sdk-go-core v0.31.1/go.mod h1:RE4F2qGCTehGtQGc9Txafc4l+XMpbjYuO4amDLFgOWE=
github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
github.com/minio/minio-go/v7 v7.0.39 h1:upnbu1jCGOqEvrGSpRauSN9ZG7RCHK7VHxXS8Vmg2zk=
github.com/minio/minio-go/v7 v7.0.39/go.mod h1:nCrRzjoSUQh8hgKKtu3Y708OLvRLtuASMg2/nvmbarw=
github.com/minio/minio-go/v7 v7.0.45 h1:g4IeM9M9pW/Lo8AGGNOjBZYlvmtlE1N5TQEYWXRWzIs=
github.com/minio/minio-go/v7 v7.0.45/go.mod h1:nCrRzjoSUQh8hgKKtu3Y708OLvRLtuASMg2/nvmbarw=
github.com/minio/sha256-simd v1.0.0 h1:v1ta+49hkWZyvaKwrQB8elexRqm6Y0aMLjCNsrYxo6g=
github.com/minio/sha256-simd v1.0.0/go.mod h1:OuYzVNI5vcoYIAmbIvHPl3N3jUzVedXbKy5RFepssQM=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
@ -288,6 +298,7 @@ github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lN
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/modocache/gover v0.0.0-20171022184752-b58185e213c5/go.mod h1:caMODM3PzxT8aQXRPkAt8xlV/e7d7w8GM5g0fa5F0D8=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/natefinch/atomic v1.0.1 h1:ZPYKxkqQOx3KZ+RsbnP/YsgvxWQPGxjC0oBt2AhwV0A=
@ -312,13 +323,14 @@ github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5Fsn
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
github.com/prometheus/client_golang v1.13.0 h1:b71QUfeo5M8gq2+evJdTPfZhYMAU0uKPkyPJ7TPsloU=
github.com/prometheus/client_golang v1.13.0/go.mod h1:vTeo+zgvILHsnnj/39Ou/1fPN5nJFOEMgftOUOmlvYQ=
github.com/prometheus/client_golang v1.14.0 h1:nJdhIvne2eSX/XRAFV9PcvFFRbrjbcTUj0VP62TMhnw=
github.com/prometheus/client_golang v1.14.0/go.mod h1:8vpkKitgIVNcqrRBWh1C4TIUQgYNtG/XQE4E/Zae36Y=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.3.0 h1:UBgGFHqYdG/TPFD1B1ogZywDqEkwp3fBMvqdiQ7Xew4=
github.com/prometheus/client_model v0.3.0/go.mod h1:LDGWKZIo7rky3hgvBe+caln+Dr3dPggB5dvjtD7w9+w=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
@ -348,6 +360,8 @@ github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6Mwd
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0=
github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/spatialcurrent/go-lazy v0.0.0-20211115014721-47315cc003d1 h1:lQ3JvmcVO1/AMFbabvUSJ4YtJRpEAX9Qza73p5j03sw=
github.com/spatialcurrent/go-lazy v0.0.0-20211115014721-47315cc003d1/go.mod h1:4aKqcbhASNqjbrG0h9BmkzcWvPJGxbef4B+j0XfFrZo=
github.com/spf13/afero v1.9.2 h1:j49Hj62F0n+DaZ1dDCvhABaPNSGNkt32oRFxI33IEMw=
github.com/spf13/afero v1.9.2/go.mod h1:iUV7ddyEEZPO5gA3zD4fJt6iStLlL+Lg4m2cihcDf8Y=
github.com/spf13/cast v1.5.0 h1:rj3WzYc11XZaIZMPKmwP96zkFEnnAmV8s6XbB2aY32w=
@ -375,6 +389,7 @@ github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKs
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/subosito/gotenv v1.4.1 h1:jyEFiXpy21Wm81FBN71l9VoMMV8H8jG+qIK3GCpY6Qs=
github.com/subosito/gotenv v1.4.1/go.mod h1:ayKnFf/c6rvx/2iiLrJUk1e6plDbT3edrFNGqEflhK0=
github.com/tg123/go-htpasswd v1.2.0 h1:UKp34m9H467/xklxUxU15wKRru7fwXoTojtxg25ITF0=
github.com/tidwall/gjson v1.14.3 h1:9jvXn7olKEHU1S9vwoMGliaT8jq1vJ7IH/n9zD9Dnlw=
github.com/tidwall/gjson v1.14.3/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
@ -400,6 +415,7 @@ github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9de
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/zalando/go-keyring v0.2.1 h1:MBRN/Z8H4U5wEKXiD67YbDAr5cj/DOStmSga70/2qKc=
github.com/zeebo/assert v1.1.0 h1:hU1L1vLTHsnO8x8c9KAR5GmM5QscxHg5RNU5z5qbUWY=
github.com/zeebo/assert v1.1.0/go.mod h1:Pq9JiuJQpG8JLJdtkwrJESF0Foym2/D9XMU5ciN/wJ0=
github.com/zeebo/blake3 v0.2.3 h1:TFoLXsjeXqRNFxSbk35Dk4YtszE/MQQGK10BH4ptoTg=
@ -434,8 +450,8 @@ golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20211108221036-ceb1ce70b4fa/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20220214200702-86341886e292/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.1.0 h1:MDRAIl0xIo9Io2xV565hzXHw3zVseKrJKodhohM5CjU=
golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
golang.org/x/crypto v0.3.0 h1:a06MkbcxBrEFc0w0QIZWXrH/9cCX6KJyWbBOIwAn+7A=
golang.org/x/crypto v0.3.0/go.mod h1:hebNnKkNXi2UzZN1eVRvBB7co0a+JxK6XbPiWVs/3J4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@ -731,8 +747,8 @@ google.golang.org/genproto v0.0.0-20201210142538-e3217bee35cc/go.mod h1:FWY/as6D
google.golang.org/genproto v0.0.0-20201214200347-8c77b98c765d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210108203827-ffc7fda8c3d7/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210226172003-ab064af71705/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20221024183307-1bc688fe9f3e h1:S9GbmC1iCgvbLyAokVCwiO6tVIrU9Y7c5oMx1V/ki/Y=
google.golang.org/genproto v0.0.0-20221024183307-1bc688fe9f3e/go.mod h1:9qHF0xnpdSfF6knlcsnpzUu5y+rpwgbvsyGAZPBMg4s=
google.golang.org/genproto v0.0.0-20221206210731-b1a01be3a5f6 h1:AGXp12e/9rItf6/4QymU7WsAUwCf+ICW75cuR91nJIc=
google.golang.org/genproto v0.0.0-20221206210731-b1a01be3a5f6/go.mod h1:1dOng4TWOomJrDGhpXjfCD35wQC6jnC7HpRmOFRqEV0=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
@ -749,8 +765,8 @@ google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.34.0/go.mod h1:WotjhfgOW/POjDeRt8vscBtXq+2VjORFy659qA51WJ8=
google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.50.1 h1:DS/BukOZWp8s6p4Dt/tOaJaTQyPyOoCcrjroHuCeLzY=
google.golang.org/grpc v1.50.1/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI=
google.golang.org/grpc v1.51.0 h1:E1eGv1FTqoLIdnBCZufiSHgKjlqG6fKFf6pPWtMTh8U=
google.golang.org/grpc v1.51.0/go.mod h1:wgNDFcnuBGmxLKI/qn4T+m5BtEBYXJPvibbUPsAIPww=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@ -768,8 +784,9 @@ google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqw
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
|
||||
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
|
||||
gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=
|
||||
gopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
|
||||
|
||||
@ -7,7 +7,9 @@ import (

	"github.com/pkg/errors"

	"github.com/alcionai/corso/src/internal/connector/discovery"
	"github.com/alcionai/corso/src/internal/connector/exchange"
	"github.com/alcionai/corso/src/internal/connector/graph"
	"github.com/alcionai/corso/src/internal/connector/onedrive"
	"github.com/alcionai/corso/src/internal/connector/sharepoint"
	"github.com/alcionai/corso/src/internal/connector/support"
@ -15,6 +17,7 @@ import (
	D "github.com/alcionai/corso/src/internal/diagnostics"
	"github.com/alcionai/corso/src/pkg/control"
	"github.com/alcionai/corso/src/pkg/logger"
	"github.com/alcionai/corso/src/pkg/path"
	"github.com/alcionai/corso/src/pkg/selectors"
)

@ -41,6 +44,15 @@ func (gc *GraphConnector) DataCollections(
		return nil, err
	}

	serviceEnabled, err := checkServiceEnabled(ctx, gc.Service, path.ServiceType(sels.Service), sels.DiscreteOwner)
	if err != nil {
		return nil, err
	}

	if !serviceEnabled {
		return []data.Collection{}, nil
	}

	switch sels.Service {
	case selectors.ServiceExchange:
		colls, err := exchange.DataCollections(
@ -124,6 +136,29 @@ func verifyBackupInputs(sels selectors.Selector, userPNs, siteIDs []string) erro
	return nil
}

func checkServiceEnabled(
	ctx context.Context,
	gs graph.Servicer,
	service path.ServiceType,
	resource string,
) (bool, error) {
	if service == path.SharePointService {
		// No "enabled" check required for sharepoint
		return true, nil
	}

	_, info, err := discovery.User(ctx, gs, resource)
	if err != nil {
		return false, err
	}

	if _, ok := info.DiscoveredServices[service]; !ok {
		return false, nil
	}

	return true, nil
}

// ---------------------------------------------------------------------------
// OneDrive
// ---------------------------------------------------------------------------

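The new `checkServiceEnabled` gate follows a simple shape: short-circuit for services with no enablement concept (SharePoint), otherwise consult a discovered-services set. A minimal self-contained sketch of that shape, with a hypothetical `ServiceType` and a stubbed discovery function standing in for `discovery.User`:

```go
package main

import "fmt"

// ServiceType is a hypothetical stand-in for path.ServiceType.
type ServiceType string

const (
	ExchangeService   ServiceType = "exchange"
	OneDriveService   ServiceType = "onedrive"
	SharePointService ServiceType = "sharepoint"
)

// discoverServices is a stub standing in for discovery.User; it returns the
// set of services this user has enabled.
func discoverServices(userID string) (map[ServiceType]struct{}, error) {
	return map[ServiceType]struct{}{ExchangeService: {}}, nil
}

// checkServiceEnabled mirrors the gate added in the diff: SharePoint needs no
// check; any other service must appear in the discovered set.
func checkServiceEnabled(service ServiceType, userID string) (bool, error) {
	if service == SharePointService {
		return true, nil
	}

	svcs, err := discoverServices(userID)
	if err != nil {
		return false, err
	}

	_, ok := svcs[service]

	return ok, nil
}

func main() {
	for _, svc := range []ServiceType{ExchangeService, OneDriveService, SharePointService} {
		enabled, _ := checkServiceEnabled(svc, "user@example.com")
		fmt.Println(svc, enabled)
	}
}
```

Returning an empty (non-nil) collection slice when the gate reports false lets callers treat "service not enabled" as an empty backup rather than an error.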
@ -13,6 +13,7 @@ import (
	"github.com/alcionai/corso/src/internal/connector/sharepoint"
	"github.com/alcionai/corso/src/internal/tester"
	"github.com/alcionai/corso/src/pkg/control"
	"github.com/alcionai/corso/src/pkg/path"
	"github.com/alcionai/corso/src/pkg/selectors"
)

@ -303,9 +304,7 @@ func (suite *ConnectorCreateSharePointCollectionIntegrationSuite) SetupSuite() {
	tester.LogTimeOfTest(suite.T())
}

// TestCreateSharePointCollection. Ensures the proper amount of collections are created based
// on the selector.
func (suite *ConnectorCreateSharePointCollectionIntegrationSuite) TestCreateSharePointCollection() {
func (suite *ConnectorCreateSharePointCollectionIntegrationSuite) TestCreateSharePointCollection_Libraries() {
	ctx, flush := tester.NewContext()
	defer flush()

@ -316,51 +315,46 @@ func (suite *ConnectorCreateSharePointCollectionIntegrationSuite) TestCreateShar
		siteIDs = []string{siteID}
	)

	tables := []struct {
		name       string
		sel        func() selectors.Selector
		comparator assert.ComparisonAssertionFunc
	}{
		{
			name:       "SharePoint.Libraries",
			comparator: assert.Equal,
			sel: func() selectors.Selector {
				sel := selectors.NewSharePointBackup(siteIDs)
				sel.Include(sel.Libraries([]string{"foo"}, selectors.PrefixMatch()))
				return sel.Selector
			},
		},
		{
			name:       "SharePoint.Lists",
			comparator: assert.Less,
			sel: func() selectors.Selector {

	cols, err := gc.DataCollections(ctx, sel.Selector, nil, control.Options{})
	require.NoError(t, err)
	assert.Len(t, cols, 1)

	for _, collection := range cols {
		t.Logf("Path: %s\n", collection.FullPath().String())
		assert.Equal(t, path.SharePointMetadataService, collection.FullPath().Service())
	}
}

func (suite *ConnectorCreateSharePointCollectionIntegrationSuite) TestCreateSharePointCollection_Lists() {
	ctx, flush := tester.NewContext()
	defer flush()

	var (
		t       = suite.T()
		siteID  = tester.M365SiteID(t)
		gc      = loadConnector(ctx, t, Sites)
		siteIDs = []string{siteID}
	)

	sel := selectors.NewSharePointBackup(siteIDs)
	sel.Include(sel.Lists(selectors.Any(), selectors.PrefixMatch()))

				return sel.Selector
			},
		},
	}

	for _, test := range tables {
		t.Run(test.name, func(t *testing.T) {
			cols, err := gc.DataCollections(ctx, test.sel(), nil, control.Options{})
	cols, err := gc.DataCollections(ctx, sel.Selector, nil, control.Options{})
	require.NoError(t, err)
			test.comparator(t, 0, len(cols))
	assert.Less(t, 0, len(cols))

			if test.name == "SharePoint.Lists" {
	for _, collection := range cols {
		t.Logf("Path: %s\n", collection.FullPath().String())

		for item := range collection.Items() {
			t.Log("File: " + item.UUID())

			bytes, err := io.ReadAll(item.ToReader())
			bs, err := io.ReadAll(item.ToReader())
			require.NoError(t, err)
			t.Log(string(bytes))

			t.Log(string(bs))
		}
	}
			}
		})
	}
}

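The removed table-driven variant keyed each case to a comparison function (testify's `assert.ComparisonAssertionFunc`), which the split into two plain tests replaces. That store-a-comparator-per-case pattern can be sketched in plain Go without testify (the `comparator` type and case data here are illustrative, not the suite's real values):

```go
package main

import "fmt"

// comparator is a hypothetical stand-in for assert.ComparisonAssertionFunc:
// it reports whether want and got satisfy the case's relation.
type comparator func(want, got int) bool

func main() {
	tables := []struct {
		name    string
		got     int
		compare comparator
	}{
		// Equal: exactly `want` collections expected.
		{name: "Libraries", got: 0, compare: func(want, got int) bool { return want == got }},
		// Less: strictly more than `want` collections expected.
		{name: "Lists", got: 3, compare: func(want, got int) bool { return want < got }},
	}

	for _, test := range tables {
		fmt.Printf("%s: %v\n", test.name, test.compare(0, test.got))
	}
}
```

Splitting into named tests trades that indirection for readability: each test body states its own assertion directly.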
@ -10,6 +10,7 @@ import (

	"github.com/alcionai/corso/src/internal/connector/graph"
	"github.com/alcionai/corso/src/internal/connector/support"
	"github.com/alcionai/corso/src/pkg/path"
)

const (
@ -64,6 +65,49 @@ func Users(ctx context.Context, gs graph.Servicer, tenantID string) ([]models.Us
	return users, iterErrs
}

type UserInfo struct {
	DiscoveredServices map[path.ServiceType]struct{}
}

func User(ctx context.Context, gs graph.Servicer, userID string) (models.Userable, *UserInfo, error) {
	user, err := gs.Client().UsersById(userID).Get(ctx, nil)
	if err != nil {
		return nil, nil, errors.Wrapf(
			err,
			"retrieving resource for tenant: %s",
			support.ConnectorStackErrorTrace(err),
		)
	}

	// Assume all services are enabled
	userInfo := &UserInfo{
		DiscoveredServices: map[path.ServiceType]struct{}{
			path.ExchangeService: {},
			path.OneDriveService: {},
		},
	}

	// Discover which services the user has enabled

	// Exchange: Query `MailFolders`
	_, err = gs.Client().UsersById(userID).MailFolders().Get(ctx, nil)
	if err != nil {
		if !graph.IsErrExchangeMailFolderNotFound(err) {
			return nil, nil, errors.Wrapf(
				err,
				"retrieving mail folders for tenant: %s",
				support.ConnectorStackErrorTrace(err),
			)
		}

		delete(userInfo.DiscoveredServices, path.ExchangeService)
	}

	// TODO: OneDrive

	return user, userInfo, nil
}

// parseUser extracts information from `models.Userable` we care about
func parseUser(item interface{}) (models.Userable, error) {
	m, ok := item.(models.Userable)

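`User` works optimistically: it assumes every service is enabled, probes each one with a cheap request, and prunes the set only when the probe fails with a recognized "not provisioned" error; any other error aborts discovery. That assume-then-prune shape, isolated from the Graph client (`probe` and `errNotProvisioned` are stubs, not SDK calls):

```go
package main

import (
	"errors"
	"fmt"
)

type Service string

// errNotProvisioned is a hypothetical sentinel standing in for the Graph
// "mailbox/folder not found" errors the real code recognizes.
var errNotProvisioned = errors.New("service not provisioned")

// probe is a stub: pretend only "exchange" answers successfully.
func probe(svc Service) error {
	if svc == "exchange" {
		return nil
	}

	return errNotProvisioned
}

// discover assumes every candidate service is enabled, then deletes the ones
// whose probe reports "not provisioned". Unexpected errors abort discovery.
func discover(candidates []Service) (map[Service]struct{}, error) {
	enabled := map[Service]struct{}{}
	for _, svc := range candidates {
		enabled[svc] = struct{}{}
	}

	for _, svc := range candidates {
		if err := probe(svc); err != nil {
			if !errors.Is(err, errNotProvisioned) {
				return nil, err
			}

			delete(enabled, svc)
		}
	}

	return enabled, nil
}

func main() {
	enabled, _ := discover([]Service{"exchange", "onedrive"})
	fmt.Println(len(enabled))
}
```

Starting from "all enabled" keeps the happy path cheap: a service only costs a probe when the code actually needs to distinguish enabled from disabled.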
@ -2,6 +2,7 @@ package api

import (
	"context"
	"time"

	"github.com/microsoft/kiota-abstractions-go/serialization"
	"github.com/pkg/errors"
@ -11,9 +12,11 @@ import (
)

// ---------------------------------------------------------------------------
// common types
// common types and consts
// ---------------------------------------------------------------------------

const numberOfRetries = 3

// DeltaUpdate holds the results of a current delta token. It normally
// gets produced when aggregating the addition and removal of items in
// a delta-queriable folder.
@ -106,3 +109,11 @@ func checkIDAndName(c graph.Container) error {

	return nil
}

func orNow(t *time.Time) time.Time {
	if t == nil {
		return time.Now().UTC()
	}

	return *t
}

@ -2,15 +2,19 @@ package api

import (
	"context"
	"fmt"
	"time"

	"github.com/hashicorp/go-multierror"
	"github.com/microsoft/kiota-abstractions-go/serialization"
	kioser "github.com/microsoft/kiota-serialization-json-go"
	"github.com/microsoftgraph/msgraph-sdk-go/models"
	"github.com/microsoftgraph/msgraph-sdk-go/users"
	"github.com/pkg/errors"

	"github.com/alcionai/corso/src/internal/connector/graph"
	"github.com/alcionai/corso/src/internal/connector/support"
	"github.com/alcionai/corso/src/pkg/backup/details"
)

// ---------------------------------------------------------------------------
@ -52,12 +56,17 @@ func (c Contacts) DeleteContactFolder(
	return c.stable.Client().UsersById(user).ContactFoldersById(folderID).Delete(ctx, nil)
}

// RetrieveContactDataForUser is a GraphRetrievalFun that returns all associated fields.
func (c Contacts) RetrieveContactDataForUser(
// GetItem retrieves a Contactable item.
func (c Contacts) GetItem(
	ctx context.Context,
	user, m365ID string,
) (serialization.Parsable, error) {
	return c.stable.Client().UsersById(user).ContactsById(m365ID).Get(ctx, nil)
	user, itemID string,
) (serialization.Parsable, *details.ExchangeInfo, error) {
	cont, err := c.stable.Client().UsersById(user).ContactsById(itemID).Get(ctx, nil)
	if err != nil {
		return nil, nil, err
	}

	return cont, ContactInfo(cont), nil
}

// GetAllContactFolderNamesForUser is a GraphQuery function for getting
@ -224,3 +233,61 @@ func (c Contacts) GetAddedAndRemovedItemIDs(

	return added, removed, DeltaUpdate{deltaURL, resetDelta}, errs.ErrorOrNil()
}

// ---------------------------------------------------------------------------
// Serialization
// ---------------------------------------------------------------------------

// Serialize serializes the item into a byte slice.
func (c Contacts) Serialize(
	ctx context.Context,
	item serialization.Parsable,
	user, itemID string,
) ([]byte, error) {
	contact, ok := item.(models.Contactable)
	if !ok {
		return nil, fmt.Errorf("expected Contactable, got %T", item)
	}

	var (
		err    error
		writer = kioser.NewJsonSerializationWriter()
	)

	defer writer.Close()

	if err = writer.WriteObjectValue("", contact); err != nil {
		return nil, support.SetNonRecoverableError(errors.Wrap(err, itemID))
	}

	bs, err := writer.GetSerializedContent()
	if err != nil {
		return nil, errors.Wrap(err, "serializing contact")
	}

	return bs, nil
}

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

func ContactInfo(contact models.Contactable) *details.ExchangeInfo {
	name := ""
	created := time.Time{}

	if contact.GetDisplayName() != nil {
		name = *contact.GetDisplayName()
	}

	if contact.GetCreatedDateTime() != nil {
		created = *contact.GetCreatedDateTime()
	}

	return &details.ExchangeInfo{
		ItemType:    details.ExchangeContact,
		ContactName: name,
		Created:     created,
		Modified:    orNow(contact.GetLastModifiedDateTime()),
	}
}

@ -1,4 +1,4 @@
package exchange
package api

import (
	"testing"
@ -11,15 +11,15 @@ import (
	"github.com/alcionai/corso/src/pkg/backup/details"
)

type ContactSuite struct {
type ContactsAPIUnitSuite struct {
	suite.Suite
}

func TestContactSuite(t *testing.T) {
	suite.Run(t, &ContactSuite{})
func TestContactsAPIUnitSuite(t *testing.T) {
	suite.Run(t, new(ContactsAPIUnitSuite))
}

func (suite *ContactSuite) TestContactInfo() {
func (suite *ContactsAPIUnitSuite) TestContactInfo() {
	initial := time.Now()

	tests := []struct {
@ -37,7 +37,6 @@ func (suite *ContactSuite) TestContactInfo() {
					ItemType: details.ExchangeContact,
					Created:  initial,
					Modified: initial,
					Size:     10,
				}
				return contact, i
			},
@ -54,7 +53,6 @@ func (suite *ContactSuite) TestContactInfo() {
					ContactName: aPerson,
					Created:     initial,
					Modified:    initial,
					Size:        10,
				}
				return contact, i
			},
@ -63,7 +61,7 @@ func (suite *ContactSuite) TestContactInfo() {
	for _, test := range tests {
		suite.T().Run(test.name, func(t *testing.T) {
			contact, expected := test.contactAndRP()
			assert.Equal(t, expected, ContactInfo(contact, 10))
			assert.Equal(t, expected, ContactInfo(contact))
		})
	}
}
@ -2,15 +2,21 @@ package api

import (
	"context"
	"fmt"
	"time"

	"github.com/hashicorp/go-multierror"
	"github.com/microsoft/kiota-abstractions-go/serialization"
	kioser "github.com/microsoft/kiota-serialization-json-go"
	"github.com/microsoftgraph/msgraph-sdk-go/models"
	"github.com/microsoftgraph/msgraph-sdk-go/users"
	"github.com/pkg/errors"

	"github.com/alcionai/corso/src/internal/common"
	"github.com/alcionai/corso/src/internal/connector/graph"
	"github.com/alcionai/corso/src/internal/connector/support"
	"github.com/alcionai/corso/src/pkg/backup/details"
	"github.com/alcionai/corso/src/pkg/logger"
	"github.com/alcionai/corso/src/pkg/path"
)

@ -52,12 +58,17 @@ func (c Events) DeleteCalendar(
	return c.stable.Client().UsersById(user).CalendarsById(calendarID).Delete(ctx, nil)
}

// RetrieveEventDataForUser is a GraphRetrievalFunc that returns event data.
func (c Events) RetrieveEventDataForUser(
// GetItem retrieves an Eventable item.
func (c Events) GetItem(
	ctx context.Context,
	user, m365ID string,
) (serialization.Parsable, error) {
	return c.stable.Client().UsersById(user).EventsById(m365ID).Get(ctx, nil)
	user, itemID string,
) (serialization.Parsable, *details.ExchangeInfo, error) {
	evt, err := c.stable.Client().UsersById(user).EventsById(itemID).Get(ctx, nil)
	if err != nil {
		return nil, nil, err
	}

	return evt, EventInfo(evt), nil
}

func (c Client) GetAllCalendarNamesForUser(
@ -190,6 +201,66 @@ func (c Events) GetAddedAndRemovedItemIDs(
	return added, nil, DeltaUpdate{}, errs.ErrorOrNil()
}

// ---------------------------------------------------------------------------
// Serialization
// ---------------------------------------------------------------------------

// Serialize retrieves attachment data identified by the event item, and then
// serializes it into a byte slice.
func (c Events) Serialize(
	ctx context.Context,
	item serialization.Parsable,
	user, itemID string,
) ([]byte, error) {
	event, ok := item.(models.Eventable)
	if !ok {
		return nil, fmt.Errorf("expected Eventable, got %T", item)
	}

	var (
		err    error
		writer = kioser.NewJsonSerializationWriter()
	)

	defer writer.Close()

	if *event.GetHasAttachments() {
		// getting all the attachments might take a couple attempts due to filesize
		var retriesErr error

		for count := 0; count < numberOfRetries; count++ {
			attached, err := c.stable.
				Client().
				UsersById(user).
				EventsById(itemID).
				Attachments().
				Get(ctx, nil)
			retriesErr = err

			if err == nil {
				event.SetAttachments(attached.GetValue())
				break
			}
		}

		if retriesErr != nil {
			logger.Ctx(ctx).Debug("exceeded maximum retries")
			return nil, support.WrapAndAppend(itemID, errors.Wrap(retriesErr, "attachment failed"), nil)
		}
	}

	if err = writer.WriteObjectValue("", event); err != nil {
		return nil, support.SetNonRecoverableError(errors.Wrap(err, itemID))
	}

	bs, err := writer.GetSerializedContent()
	if err != nil {
		return nil, errors.Wrap(err, "serializing calendar event")
	}

	return bs, nil
}

// ---------------------------------------------------------------------------
// helper funcs
// ---------------------------------------------------------------------------
@ -216,3 +287,68 @@ func (c CalendarDisplayable) GetDisplayName() *string {
func (c CalendarDisplayable) GetParentFolderId() *string {
	return nil
}

func EventInfo(evt models.Eventable) *details.ExchangeInfo {
	var (
		organizer, subject string
		recurs             bool
		start              = time.Time{}
		end                = time.Time{}
		created            = time.Time{}
	)

	if evt.GetOrganizer() != nil &&
		evt.GetOrganizer().GetEmailAddress() != nil &&
		evt.GetOrganizer().GetEmailAddress().GetAddress() != nil {
		organizer = *evt.GetOrganizer().
			GetEmailAddress().
			GetAddress()
	}

	if evt.GetSubject() != nil {
		subject = *evt.GetSubject()
	}

	if evt.GetRecurrence() != nil {
		recurs = true
	}

	if evt.GetStart() != nil &&
		evt.GetStart().GetDateTime() != nil {
		// timeString has 'Z' literal added to ensure the stored
		// DateTime is not: time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC)
		startTime := *evt.GetStart().GetDateTime() + "Z"

		output, err := common.ParseTime(startTime)
		if err == nil {
			start = output
		}
	}

	if evt.GetEnd() != nil &&
		evt.GetEnd().GetDateTime() != nil {
		// timeString has 'Z' literal added to ensure the stored
		// DateTime is not: time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC)
		endTime := *evt.GetEnd().GetDateTime() + "Z"

		output, err := common.ParseTime(endTime)
		if err == nil {
			end = output
		}
	}

	if evt.GetCreatedDateTime() != nil {
		created = *evt.GetCreatedDateTime()
	}

	return &details.ExchangeInfo{
		ItemType:    details.ExchangeEvent,
		Organizer:   organizer,
		Subject:     subject,
		EventStart:  start,
		EventEnd:    end,
		EventRecurs: recurs,
		Created:     created,
		Modified:    orNow(evt.GetLastModifiedDateTime()),
	}
}

@ -1,4 +1,4 @@
package exchange
package api

import (
	"testing"
@ -15,17 +15,17 @@ import (
	"github.com/alcionai/corso/src/pkg/backup/details"
)

type EventSuite struct {
type EventsAPIUnitSuite struct {
	suite.Suite
}

func TestEventSuite(t *testing.T) {
	suite.Run(t, &EventSuite{})
func TestEventsAPIUnitSuite(t *testing.T) {
	suite.Run(t, new(EventsAPIUnitSuite))
}

// TestEventInfo verifies that searchable event metadata
// can be properly retrieved from a models.Eventable object
func (suite *EventSuite) TestEventInfo() {
func (suite *EventsAPIUnitSuite) TestEventInfo() {
	// Exchange stores start/end times in UTC and the below compares hours
	// directly so we need to "normalize" the timezone here.
	initial := time.Now().UTC()
@ -136,7 +136,6 @@ func (suite *EventSuite) TestEventInfo() {
					Organizer:  organizer,
					EventStart: eventTime,
					EventEnd:   eventEndTime,
					Size:       10,
				}
			},
		},
@ -144,7 +143,7 @@ func (suite *EventSuite) TestEventInfo() {
	for _, test := range tests {
		suite.T().Run(test.name, func(t *testing.T) {
			event, expected := test.evtAndRP()
			result := EventInfo(event, 10)
			result := EventInfo(event)

			assert.Equal(t, expected.Subject, result.Subject, "subject")
			assert.Equal(t, expected.Sender, result.Sender, "sender")
@ -2,15 +2,20 @@ package api

import (
	"context"
	"fmt"
	"time"

	"github.com/hashicorp/go-multierror"
	"github.com/microsoft/kiota-abstractions-go/serialization"
	kioser "github.com/microsoft/kiota-serialization-json-go"
	"github.com/microsoftgraph/msgraph-sdk-go/models"
	"github.com/microsoftgraph/msgraph-sdk-go/users"
	"github.com/pkg/errors"

	"github.com/alcionai/corso/src/internal/connector/graph"
	"github.com/alcionai/corso/src/internal/connector/support"
	"github.com/alcionai/corso/src/pkg/backup/details"
	"github.com/alcionai/corso/src/pkg/logger"
)

// ---------------------------------------------------------------------------
@ -92,12 +97,17 @@ func (c Mail) GetContainerByID(
	return service.Client().UsersById(userID).MailFoldersById(dirID).Get(ctx, ofmf)
}

// RetrieveMessageDataForUser is a GraphRetrievalFunc that returns message data.
func (c Mail) RetrieveMessageDataForUser(
// GetItem retrieves a Messageable item.
func (c Mail) GetItem(
	ctx context.Context,
	user, m365ID string,
) (serialization.Parsable, error) {
	return c.stable.Client().UsersById(user).MessagesById(m365ID).Get(ctx, nil)
	user, itemID string,
) (serialization.Parsable, *details.ExchangeInfo, error) {
	mail, err := c.stable.Client().UsersById(user).MessagesById(itemID).Get(ctx, nil)
	if err != nil {
		return nil, nil, err
	}

	return mail, MailInfo(mail), nil
}

// EnumerateContainers iterates through all of the users current
@ -223,3 +233,101 @@ func (c Mail) GetAddedAndRemovedItemIDs(

	return added, removed, DeltaUpdate{deltaURL, resetDelta}, errs.ErrorOrNil()
}

// ---------------------------------------------------------------------------
// Serialization
// ---------------------------------------------------------------------------

// Serialize retrieves attachment data identified by the mail item, and then
// serializes it into a byte slice.
func (c Mail) Serialize(
	ctx context.Context,
	item serialization.Parsable,
	user, itemID string,
) ([]byte, error) {
	msg, ok := item.(models.Messageable)
	if !ok {
		return nil, fmt.Errorf("expected Messageable, got %T", item)
	}

	var (
		err    error
		writer = kioser.NewJsonSerializationWriter()
	)

	defer writer.Close()

	if *msg.GetHasAttachments() {
		// getting all the attachments might take a couple attempts due to filesize
		var retriesErr error

		for count := 0; count < numberOfRetries; count++ {
			attached, err := c.stable.
				Client().
				UsersById(user).
				MessagesById(itemID).
				Attachments().
				Get(ctx, nil)
			retriesErr = err

			if err == nil {
				msg.SetAttachments(attached.GetValue())
				break
			}
		}

		if retriesErr != nil {
			logger.Ctx(ctx).Debug("exceeded maximum retries")
			return nil, support.WrapAndAppend(itemID, errors.Wrap(retriesErr, "attachment failed"), nil)
		}
	}

	if err = writer.WriteObjectValue("", msg); err != nil {
		return nil, support.SetNonRecoverableError(errors.Wrap(err, itemID))
	}

	bs, err := writer.GetSerializedContent()
	if err != nil {
		return nil, errors.Wrap(err, "serializing email")
	}

	return bs, nil
}

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

func MailInfo(msg models.Messageable) *details.ExchangeInfo {
	sender := ""
	subject := ""
	received := time.Time{}
	created := time.Time{}

	if msg.GetSender() != nil &&
		msg.GetSender().GetEmailAddress() != nil &&
		msg.GetSender().GetEmailAddress().GetAddress() != nil {
		sender = *msg.GetSender().GetEmailAddress().GetAddress()
	}

	if msg.GetSubject() != nil {
		subject = *msg.GetSubject()
	}

	if msg.GetReceivedDateTime() != nil {
		received = *msg.GetReceivedDateTime()
	}

	if msg.GetCreatedDateTime() != nil {
		created = *msg.GetCreatedDateTime()
	}

	return &details.ExchangeInfo{
		ItemType: details.ExchangeMail,
		Sender:   sender,
		Subject:  subject,
		Received: received,
		Created:  created,
		Modified: orNow(msg.GetLastModifiedDateTime()),
	}
}

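`MailInfo`, `ContactInfo`, and `EventInfo` all follow one convention for the SDK's pointer-heavy models: nil-check every optional getter chain before dereferencing, and leave a zero value in place when the field is absent, so the returned info struct is always fully populated. A reduced sketch of that convention (`Message` and `MailInfo` here are simplified stand-ins, not the SDK or details types):

```go
package main

import (
	"fmt"
	"time"
)

// Message mimics the pointer-typed optional fields on an SDK model.
type Message struct {
	Subject  *string
	Sender   *string
	Received *time.Time
}

type MailInfo struct {
	Subject  string
	Sender   string
	Received time.Time
}

// Info nil-checks every optional field before dereferencing, leaving the
// zero value in place when the field is absent.
func Info(m Message) MailInfo {
	info := MailInfo{}

	if m.Subject != nil {
		info.Subject = *m.Subject
	}

	if m.Sender != nil {
		info.Sender = *m.Sender
	}

	if m.Received != nil {
		info.Received = *m.Received
	}

	return info
}

func main() {
	subject := "status report"
	fmt.Printf("%+v\n", Info(Message{Subject: &subject}))
}
```

Copying into value-typed fields up front means downstream consumers of the info struct never have to repeat the nil checks.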
@ -1,4 +1,4 @@
package exchange
package api

import (
	"testing"
@ -10,15 +10,15 @@ import (
	"github.com/alcionai/corso/src/pkg/backup/details"
)

type MessageSuite struct {
type MailAPIUnitSuite struct {
	suite.Suite
}

func TestMessageSuite(t *testing.T) {
	suite.Run(t, &MessageSuite{})
func TestMailAPIUnitSuite(t *testing.T) {
	suite.Run(t, new(MailAPIUnitSuite))
}

func (suite *MessageSuite) TestMessageInfo() {
func (suite *MailAPIUnitSuite) TestMailInfo() {
	initial := time.Now()

	tests := []struct {
@ -36,7 +36,6 @@ func (suite *MessageSuite) TestMessageInfo() {
					ItemType: details.ExchangeMail,
					Created:  initial,
					Modified: initial,
					Size:     10,
				}
				return msg, i
			},
@ -58,7 +57,6 @@ func (suite *MessageSuite) TestMessageInfo() {
					Sender:   sender,
					Created:  initial,
					Modified: initial,
					Size:     10,
				}
				return msg, i
			},
@ -76,7 +74,6 @@ func (suite *MessageSuite) TestMessageInfo() {
					Subject:  subject,
					Created:  initial,
					Modified: initial,
					Size:     10,
				}
				return msg, i
			},
@ -94,7 +91,6 @@ func (suite *MessageSuite) TestMessageInfo() {
					Received: now,
					Created:  initial,
					Modified: initial,
					Size:     10,
				}
				return msg, i
			},
@ -122,7 +118,6 @@ func (suite *MessageSuite) TestMessageInfo() {
					Received: now,
					Created:  initial,
					Modified: initial,
					Size:     10,
				}
				return msg, i
			},
@ -131,7 +126,7 @@ func (suite *MessageSuite) TestMessageInfo() {
	for _, tt := range tests {
		suite.T().Run(tt.name, func(t *testing.T) {
			msg, expected := tt.msgAndRP()
			suite.Equal(expected, MessageInfo(msg, 10))
			suite.Equal(expected, MailInfo(msg))
		})
	}
}
@ -1,36 +0,0 @@
package exchange

import (
	"time"

	"github.com/microsoftgraph/msgraph-sdk-go/models"

	"github.com/alcionai/corso/src/pkg/backup/details"
)

// ContactInfo translate models.Contactable metadata into searchable content
func ContactInfo(contact models.Contactable, size int64) *details.ExchangeInfo {
	name := ""
	created := time.Time{}
	modified := time.Time{}

	if contact.GetDisplayName() != nil {
		name = *contact.GetDisplayName()
	}

	if contact.GetCreatedDateTime() != nil {
		created = *contact.GetCreatedDateTime()
	}

	if contact.GetLastModifiedDateTime() != nil {
		modified = *contact.GetLastModifiedDateTime()
	}

	return &details.ExchangeInfo{
		ItemType:    details.ExchangeContact,
		ContactName: name,
		Created:     created,
		Modified:    modified,
		Size:        size,
	}
}
@ -26,15 +26,13 @@ func (cfc *contactFolderCache) populateContactRoot(
|
||||
) error {
|
||||
f, err := cfc.getter.GetContainerByID(ctx, cfc.userID, directoryID)
|
||||
if err != nil {
|
||||
return errors.Wrapf(
|
||||
err,
|
||||
"fetching root contact folder: "+support.ConnectorStackErrorTrace(err))
|
||||
return support.ConnectorStackErrorTraceWrap(err, "fetching root folder")
|
||||
}
|
||||
|
||||
temp := graph.NewCacheFolder(f, path.Builder{}.Append(baseContainerPath...))
|
||||
|
||||
if err := cfc.addFolder(temp); err != nil {
|
||||
return errors.Wrap(err, "adding cache root")
|
||||
return errors.Wrap(err, "adding resolver dir")
|
||||
}
|
||||
|
||||
return nil
|
||||
@@ -50,16 +48,16 @@ func (cfc *contactFolderCache) Populate(
	baseContainerPather ...string,
) error {
	if err := cfc.init(ctx, baseID, baseContainerPather); err != nil {
		return err
		return errors.Wrap(err, "initializing")
	}

	err := cfc.enumer.EnumerateContainers(ctx, cfc.userID, baseID, cfc.addFolder)
	if err != nil {
		return err
		return errors.Wrap(err, "enumerating containers")
	}

	if err := cfc.populatePaths(ctx); err != nil {
		return errors.Wrap(err, "contacts resolver")
		return errors.Wrap(err, "populating paths")
	}

	return nil

@@ -251,7 +251,7 @@ func createCollections(
		Credentials: creds,
	}

	foldersComplete, closer := observe.MessageWithCompletion(fmt.Sprintf("∙ %s - %s:", qp.Category, user))
	foldersComplete, closer := observe.MessageWithCompletion(ctx, observe.Bulletf("%s - %s", qp.Category, user))
	defer closer()
	defer close(foldersComplete)

@@ -1,82 +0,0 @@
package exchange

import (
	"time"

	"github.com/microsoftgraph/msgraph-sdk-go/models"

	"github.com/alcionai/corso/src/internal/common"
	"github.com/alcionai/corso/src/pkg/backup/details"
)

// EventInfo returns searchable metadata for stored event objects.
func EventInfo(evt models.Eventable, size int64) *details.ExchangeInfo {
	var (
		organizer, subject string
		recurs             bool
		start              = time.Time{}
		end                = time.Time{}
		created            = time.Time{}
		modified           = time.Time{}
	)

	if evt.GetOrganizer() != nil &&
		evt.GetOrganizer().GetEmailAddress() != nil &&
		evt.GetOrganizer().GetEmailAddress().GetAddress() != nil {
		organizer = *evt.GetOrganizer().
			GetEmailAddress().
			GetAddress()
	}

	if evt.GetSubject() != nil {
		subject = *evt.GetSubject()
	}

	if evt.GetRecurrence() != nil {
		recurs = true
	}

	if evt.GetStart() != nil &&
		evt.GetStart().GetDateTime() != nil {
		// timeString has 'Z' literal added to ensure the stored
		// DateTime is not: time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC)
		startTime := *evt.GetStart().GetDateTime() + "Z"

		output, err := common.ParseTime(startTime)
		if err == nil {
			start = output
		}
	}

	if evt.GetEnd() != nil &&
		evt.GetEnd().GetDateTime() != nil {
		// timeString has 'Z' literal added to ensure the stored
		// DateTime is not: time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC)
		endTime := *evt.GetEnd().GetDateTime() + "Z"

		output, err := common.ParseTime(endTime)
		if err == nil {
			end = output
		}
	}

	if evt.GetCreatedDateTime() != nil {
		created = *evt.GetCreatedDateTime()
	}

	if evt.GetLastModifiedDateTime() != nil {
		modified = *evt.GetLastModifiedDateTime()
	}

	return &details.ExchangeInfo{
		ItemType:    details.ExchangeEvent,
		Organizer:   organizer,
		Subject:     subject,
		EventStart:  start,
		EventEnd:    end,
		EventRecurs: recurs,
		Created:     created,
		Modified:    modified,
		Size:        size,
	}
}
@@ -31,7 +31,7 @@ func (ecc *eventCalendarCache) Populate(

	err := ecc.enumer.EnumerateContainers(ctx, ecc.userID, "", ecc.addFolder)
	if err != nil {
		return err
		return errors.Wrap(err, "enumerating containers")
	}

	return nil
@@ -41,20 +41,20 @@ func (ecc *eventCalendarCache) Populate(
// @returns error iff the required values are not accessible.
func (ecc *eventCalendarCache) AddToCache(ctx context.Context, f graph.Container) error {
	if err := checkIDAndName(f); err != nil {
		return errors.Wrap(err, "adding cache folder")
		return errors.Wrap(err, "validating container")
	}

	temp := graph.NewCacheFolder(f, path.Builder{}.Append(*f.GetDisplayName()))

	if err := ecc.addFolder(temp); err != nil {
		return errors.Wrap(err, "adding cache folder")
		return errors.Wrap(err, "adding container")
	}

	// Populate the path for this entry so calls to PathInCache succeed no matter
	// when they're made.
	_, err := ecc.IDToPath(ctx, *f.GetId())
	if err != nil {
		return errors.Wrap(err, "adding cache entry")
		return errors.Wrap(err, "setting path to container id")
	}

	return nil

@@ -6,18 +6,13 @@ package exchange
import (
	"bytes"
	"context"
	"fmt"
	"io"
	"sync"
	"sync/atomic"
	"time"

	absser "github.com/microsoft/kiota-abstractions-go/serialization"
	"github.com/microsoftgraph/msgraph-sdk-go/models"
	"github.com/pkg/errors"
	"github.com/microsoft/kiota-abstractions-go/serialization"

	"github.com/alcionai/corso/src/internal/connector/exchange/api"
	"github.com/alcionai/corso/src/internal/connector/graph"
	"github.com/alcionai/corso/src/internal/connector/support"
	"github.com/alcionai/corso/src/internal/data"
	"github.com/alcionai/corso/src/internal/observe"
@@ -43,6 +38,18 @@ const (
	urlPrefetchChannelBufferSize = 4
)

type itemer interface {
	GetItem(
		ctx context.Context,
		user, itemID string,
	) (serialization.Parsable, *details.ExchangeInfo, error)
	Serialize(
		ctx context.Context,
		item serialization.Parsable,
		user, itemID string,
	) ([]byte, error)
}

// Collection implements the interface from data.Collection
// Structure holds data for an Exchange application for a single user
type Collection struct {
@@ -51,13 +58,11 @@ type Collection struct {
	data chan data.Stream

	// added is a list of existing item IDs that were added to a container
	added []string
	added map[string]struct{}
	// removed is a list of item IDs that were deleted from, or moved out, of a container
	removed []string
	removed map[string]struct{}

	// service - client/adapter pair used to access M365 back store
	service graph.Servicer
	ac      api.Client
	items   itemer

	category      path.CategoryType
	statusUpdater support.StatusUpdater
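The slice-to-set change above gives the collection free deduplication, and the later service-iterator code relies on a "remove wins over add" rule. This is not part of the diff, but a minimal standalone sketch (with a hypothetical `mergeDeltaResults` helper name) of the same bookkeeping:

```go
package main

import "fmt"

// mergeDeltaResults mirrors the added/removed bookkeeping: sets dedupe
// repeated IDs from delta queries, and a remove always wins over an add,
// since a deleted-then-restored item comes back with a different ID.
func mergeDeltaResults(added, removed []string) (map[string]struct{}, map[string]struct{}) {
	addSet := map[string]struct{}{}
	rmSet := map[string]struct{}{}

	for _, id := range added {
		addSet[id] = struct{}{}
	}

	for _, id := range removed {
		delete(addSet, id) // remove wins over add
		rmSet[id] = struct{}{}
	}

	return addSet, rmSet
}

func main() {
	add, rm := mergeDeltaResults(
		[]string{"a1", "a2", "a1", "i1"},
		[]string{"r1", "r1", "i1"})
	fmt.Println(len(add), len(rm))
}
```

This matches the expectations in the `TestFilterContainersAndFillCollections_repeatedItems` cases further down: repeated adds and removes collapse, and an ID present in both lists ends up only in the removed set.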
@@ -87,26 +92,24 @@ func NewCollection(
	user string,
	curr, prev path.Path,
	category path.CategoryType,
	ac api.Client,
	service graph.Servicer,
	items itemer,
	statusUpdater support.StatusUpdater,
	ctrlOpts control.Options,
	doNotMergeItems bool,
) Collection {
	collection := Collection{
		ac:              ac,
		category:        category,
		ctrl:            ctrlOpts,
		data:            make(chan data.Stream, collectionChannelBufferSize),
		doNotMergeItems: doNotMergeItems,
		fullPath:        curr,
		added:           make([]string, 0),
		removed:         make([]string, 0),
		added:           make(map[string]struct{}, 0),
		removed:         make(map[string]struct{}, 0),
		prevPath:        prev,
		service:         service,
		state:           stateOf(prev, curr),
		statusUpdater:   statusUpdater,
		user:            user,
		items:           items,
	}

	return collection
@@ -135,22 +138,6 @@ func (col *Collection) Items() <-chan data.Stream {
	return col.data
}

// GetQueryAndSerializeFunc is a helper that returns the two functions
// required to convert an M365 identifier into a byte array filled with the serialized data
func GetQueryAndSerializeFunc(ac api.Client, category path.CategoryType) (api.GraphRetrievalFunc, GraphSerializeFunc) {
	switch category {
	case path.ContactsCategory:
		return ac.Contacts().RetrieveContactDataForUser, serializeAndStreamContact
	case path.EventsCategory:
		return ac.Events().RetrieveEventDataForUser, serializeAndStreamEvent
	case path.EmailCategory:
		return ac.Mail().RetrieveMessageDataForUser, serializeAndStreamMessage
	// Unsupported options return nil, nil
	default:
		return nil, nil
	}
}

// FullPath returns the Collection's fullPath []string
func (col *Collection) FullPath() path.Path {
	return col.fullPath
@@ -193,7 +180,11 @@ func (col *Collection) streamItems(ctx context.Context) {

	if len(col.added)+len(col.removed) > 0 {
		var closer func()
		colProgress, closer = observe.CollectionProgress(user, col.fullPath.Category().String(), col.fullPath.Folder())
		colProgress, closer = observe.CollectionProgress(
			ctx,
			user,
			col.fullPath.Category().String(),
			col.fullPath.Folder())

		go closer()

@@ -202,15 +193,6 @@ func (col *Collection) streamItems(ctx context.Context) {
		}()
	}

	// get QueryBasedonIdentifier
	// verify that it is the correct type in called function
	// serializationFunction
	query, serializeFunc := GetQueryAndSerializeFunc(col.ac, col.category)
	if query == nil {
		errs = fmt.Errorf("unrecognized collection type: %s", col.category)
		return
	}

	// Limit the max number of active requests to GC
	semaphoreCh := make(chan struct{}, urlPrefetchChannelBufferSize)
	defer close(semaphoreCh)
@@ -220,7 +202,7 @@ func (col *Collection) streamItems(ctx context.Context) {
	}

	// delete all removed items
	for _, id := range col.removed {
	for id := range col.removed {
		semaphoreCh <- struct{}{}

		wg.Add(1)
@@ -245,7 +227,7 @@ func (col *Collection) streamItems(ctx context.Context) {
	}

	// add any new items
	for _, id := range col.added {
	for id := range col.added {
		if col.ctrl.FailFast && errs != nil {
			break
		}
@@ -259,16 +241,17 @@ func (col *Collection) streamItems(ctx context.Context) {
			defer func() { <-semaphoreCh }()

			var (
				response absser.Parsable
				item serialization.Parsable
				info *details.ExchangeInfo
				err  error
			)

			for i := 1; i <= numberOfRetries; i++ {
				response, err = query(ctx, user, id)
				item, info, err = col.items.GetItem(ctx, user, id)
				if err == nil {
					break
				}
				// TODO: Tweak sleep times

				if i < numberOfRetries {
					time.Sleep(time.Duration(3*(i+1)) * time.Second)
				}
@@ -279,19 +262,23 @@ func (col *Collection) streamItems(ctx context.Context) {
				return
			}

			byteCount, err := serializeFunc(
				ctx,
				col.service,
				col.data,
				response,
				user)
			data, err := col.items.Serialize(ctx, item, user, id)
			if err != nil {
				errUpdater(user, err)
				return
			}

			info.Size = int64(len(data))

			col.data <- &Stream{
				id:      id,
				message: data,
				info:    info,
				modTime: info.Modified,
			}

			atomic.AddInt64(&success, 1)
			atomic.AddInt64(&totalBytes, int64(byteCount))
			atomic.AddInt64(&totalBytes, info.Size)

			if colProgress != nil {
				colProgress <- struct{}{}
@@ -317,181 +304,10 @@ func (col *Collection) finishPopulation(ctx context.Context, success int, totalB
		},
		errs,
		col.fullPath.Folder())
	logger.Ctx(ctx).Debug(status.String())
	logger.Ctx(ctx).Debugw("done streaming items", "status", status.String())
	col.statusUpdater(status)
}

type modTimer interface {
	GetLastModifiedDateTime() *time.Time
}

func getModTime(mt modTimer) time.Time {
	res := time.Now().UTC()

	if t := mt.GetLastModifiedDateTime(); t != nil {
		res = *t
	}

	return res
}

// GraphSerializeFunc is the class of functions used by Collections to transform GraphRetrievalFunc
// responses into data.Stream items contained within the Collection
type GraphSerializeFunc func(
	ctx context.Context,
	service graph.Servicer,
	dataChannel chan<- data.Stream,
	parsable absser.Parsable,
	user string,
) (int, error)

// serializeAndStreamEvent is a GraphSerializeFunc used to serialize models.Eventable objects into
// data.Stream objects. Returns an error if the process finishes unsuccessfully.
func serializeAndStreamEvent(
	ctx context.Context,
	service graph.Servicer,
	dataChannel chan<- data.Stream,
	parsable absser.Parsable,
	user string,
) (int, error) {
	var err error

	event, ok := parsable.(models.Eventable)
	if !ok {
		return 0, fmt.Errorf("expected Eventable, got %T", parsable)
	}

	if *event.GetHasAttachments() {
		var retriesErr error

		for count := 0; count < numberOfRetries; count++ {
			attached, err := service.
				Client().
				UsersById(user).
				EventsById(*event.GetId()).
				Attachments().
				Get(ctx, nil)
			retriesErr = err

			if err == nil && attached != nil {
				event.SetAttachments(attached.GetValue())
				break
			}
		}

		if retriesErr != nil {
			logger.Ctx(ctx).Debug("exceeded maximum retries")

			return 0, support.WrapAndAppend(
				*event.GetId(),
				errors.Wrap(retriesErr, "attachment failed"),
				nil)
		}
	}

	byteArray, err := service.Serialize(event)
	if err != nil {
		return 0, support.WrapAndAppend(*event.GetId(), errors.Wrap(err, "serializing content"), nil)
	}

	if len(byteArray) > 0 {
		dataChannel <- &Stream{
			id:      *event.GetId(),
			message: byteArray,
			info:    EventInfo(event, int64(len(byteArray))),
			modTime: getModTime(event),
		}
	}

	return len(byteArray), nil
}

// serializeAndStreamContact is a GraphSerializeFunc for models.Contactable
func serializeAndStreamContact(
	ctx context.Context,
	service graph.Servicer,
	dataChannel chan<- data.Stream,
	parsable absser.Parsable,
	user string,
) (int, error) {
	contact, ok := parsable.(models.Contactable)
	if !ok {
		return 0, fmt.Errorf("expected Contactable, got %T", parsable)
	}

	bs, err := service.Serialize(contact)
	if err != nil {
		return 0, support.WrapAndAppend(*contact.GetId(), err, nil)
	}

	if len(bs) > 0 {
		dataChannel <- &Stream{
			id:      *contact.GetId(),
			message: bs,
			info:    ContactInfo(contact, int64(len(bs))),
			modTime: getModTime(contact),
		}
	}

	return len(bs), nil
}

// serializeAndStreamMessage is the GraphSerializeFunc for models.Messageable
func serializeAndStreamMessage(
	ctx context.Context,
	service graph.Servicer,
	dataChannel chan<- data.Stream,
	parsable absser.Parsable,
	user string,
) (int, error) {
	var err error

	msg, ok := parsable.(models.Messageable)
	if !ok {
		return 0, fmt.Errorf("expected Messageable, got %T", parsable)
	}

	if *msg.GetHasAttachments() {
		// getting all the attachments might take a couple attempts due to filesize
		var retriesErr error

		for count := 0; count < numberOfRetries; count++ {
			attached, err := service.
				Client().
				UsersById(user).
				MessagesById(*msg.GetId()).
				Attachments().
				Get(ctx, nil)
			retriesErr = err

			if err == nil {
				msg.SetAttachments(attached.GetValue())
				break
			}
		}

		if retriesErr != nil {
			logger.Ctx(ctx).Debug("exceeded maximum retries")
			return 0, support.WrapAndAppend(*msg.GetId(), errors.Wrap(retriesErr, "attachment failed"), nil)
		}
	}

	bs, err := service.Serialize(msg)
	if err != nil {
		err = support.WrapAndAppend(*msg.GetId(), errors.Wrap(err, "serializing mail content"), nil)
		return 0, support.SetNonRecoverableError(err)
	}

	dataChannel <- &Stream{
		id:      *msg.GetId(),
		message: bs,
		info:    MessageInfo(msg, int64(len(bs))),
		modTime: getModTime(msg),
	}

	return len(bs), nil
}

// Stream represents a single item retrieved from exchange
type Stream struct {
	id string

@@ -2,18 +2,33 @@ package exchange

import (
	"bytes"
	"context"
	"testing"

	"github.com/microsoft/kiota-abstractions-go/serialization"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	"github.com/stretchr/testify/suite"

	"github.com/alcionai/corso/src/internal/connector/exchange/api"
	"github.com/alcionai/corso/src/internal/data"
	"github.com/alcionai/corso/src/pkg/backup/details"
	"github.com/alcionai/corso/src/pkg/control"
	"github.com/alcionai/corso/src/pkg/path"
)

type mockItemer struct{}

func (mi mockItemer) GetItem(
	context.Context,
	string, string,
) (serialization.Parsable, *details.ExchangeInfo, error) {
	return nil, nil, nil
}

func (mi mockItemer) Serialize(context.Context, serialization.Parsable, string, string) ([]byte, error) {
	return nil, nil
}

type ExchangeDataCollectionSuite struct {
	suite.Suite
}
@@ -137,7 +152,9 @@ func (suite *ExchangeDataCollectionSuite) TestNewCollection_state() {
			c := NewCollection(
				"u",
				test.curr, test.prev,
				0, api.Client{}, nil, nil, control.Options{},
				0,
				mockItemer{}, nil,
				control.Options{},
				false)
			assert.Equal(t, test.expect, c.State())
		})

@@ -35,7 +35,7 @@ func (mc *mailFolderCache) populateMailRoot(

	f, err := mc.getter.GetContainerByID(ctx, mc.userID, fldr)
	if err != nil {
		return errors.Wrap(err, "fetching root folder"+support.ConnectorStackErrorTrace(err))
		return support.ConnectorStackErrorTraceWrap(err, "fetching root folder")
	}

	if fldr == DefaultMailFolder {
@@ -44,7 +44,7 @@ func (mc *mailFolderCache) populateMailRoot(

		temp := graph.NewCacheFolder(f, path.Builder{}.Append(directory))
		if err := mc.addFolder(temp); err != nil {
			return errors.Wrap(err, "initializing mail resolver")
			return errors.Wrap(err, "adding resolver dir")
		}
	}

@@ -62,16 +62,16 @@ func (mc *mailFolderCache) Populate(
	baseContainerPath ...string,
) error {
	if err := mc.init(ctx); err != nil {
		return err
		return errors.Wrap(err, "initializing")
	}

	err := mc.enumer.EnumerateContainers(ctx, mc.userID, "", mc.addFolder)
	if err != nil {
		return err
		return errors.Wrap(err, "enumerating containers")
	}

	if err := mc.populatePaths(ctx); err != nil {
		return errors.Wrap(err, "mail resolver")
		return errors.Wrap(err, "populating paths")
	}

	return nil

@@ -1,49 +0,0 @@
package exchange

import (
	"time"

	"github.com/microsoftgraph/msgraph-sdk-go/models"

	"github.com/alcionai/corso/src/pkg/backup/details"
)

func MessageInfo(msg models.Messageable, size int64) *details.ExchangeInfo {
	sender := ""
	subject := ""
	received := time.Time{}
	created := time.Time{}
	modified := time.Time{}

	if msg.GetSender() != nil &&
		msg.GetSender().GetEmailAddress() != nil &&
		msg.GetSender().GetEmailAddress().GetAddress() != nil {
		sender = *msg.GetSender().GetEmailAddress().GetAddress()
	}

	if msg.GetSubject() != nil {
		subject = *msg.GetSubject()
	}

	if msg.GetReceivedDateTime() != nil {
		received = *msg.GetReceivedDateTime()
	}

	if msg.GetCreatedDateTime() != nil {
		created = *msg.GetCreatedDateTime()
	}

	if msg.GetLastModifiedDateTime() != nil {
		modified = *msg.GetLastModifiedDateTime()
	}

	return &details.ExchangeInfo{
		ItemType: details.ExchangeMail,
		Sender:   sender,
		Subject:  subject,
		Received: received,
		Created:  created,
		Modified: modified,
		Size:     size,
	}
}
@@ -2,6 +2,7 @@ package exchange

import (
	"context"
	"fmt"

	"github.com/pkg/errors"

@@ -56,19 +57,16 @@ func filterContainersAndFillCollections(
		return err
	}

	ibt, err := itemerByType(ac, scope.Category().PathType())
	if err != nil {
		return err
	}

	for _, c := range resolver.Items() {
		if ctrlOpts.FailFast && errs != nil {
			return errs
		}

		// cannot be moved out of the loop,
		// else we run into state issues.
		service, err := createService(qp.Credentials)
		if err != nil {
			errs = support.WrapAndAppend(qp.ResourceOwner, err, errs)
			continue
		}

		cID := *c.GetId()
		delete(tombstones, cID)

@@ -118,15 +116,24 @@ func filterContainersAndFillCollections(
			currPath,
			prevPath,
			scope.Category().PathType(),
			ac,
			service,
			ibt,
			statusUpdater,
			ctrlOpts,
			newDelta.Reset)

		collections[cID] = &edc
		edc.added = append(edc.added, added...)
		edc.removed = append(edc.removed, removed...)

		for _, add := range added {
			edc.added[add] = struct{}{}
		}

		// Remove any deleted IDs from the set of added IDs because items that are
		// deleted and then restored will have a different ID than they did
		// originally.
		for _, remove := range removed {
			delete(edc.added, remove)
			edc.removed[remove] = struct{}{}
		}

		// add the current path for the container ID to be used in the next backup
		// as the "previous path", for reference in case of a rename or relocation.
@@ -138,12 +145,6 @@ func filterContainersAndFillCollections(
	// in the `previousPath` set, but does not exist in the current container
	// resolver (which contains all the resource owners' current containers).
	for id, p := range tombstones {
		service, err := createService(qp.Credentials)
		if err != nil {
			errs = support.WrapAndAppend(p, err, errs)
			continue
		}

		if collections[id] != nil {
			errs = support.WrapAndAppend(p, errors.New("conflict: tombstone exists for a live collection"), errs)
			continue
@@ -168,8 +169,7 @@ func filterContainersAndFillCollections(
			nil, // marks the collection as deleted
			prevPath,
			scope.Category().PathType(),
			ac,
			service,
			ibt,
			statusUpdater,
			ctrlOpts,
			false)
@@ -221,3 +221,16 @@ func pathFromPrevString(ps string) (path.Path, error) {

	return p, nil
}

func itemerByType(ac api.Client, category path.CategoryType) (itemer, error) {
	switch category {
	case path.EmailCategory:
		return ac.Mail(), nil
	case path.EventsCategory:
		return ac.Events(), nil
	case path.ContactsCategory:
		return ac.Contacts(), nil
	default:
		return nil, fmt.Errorf("category %s not supported by getFetchIDFunc", category)
	}
}

@@ -333,8 +333,160 @@ func (suite *ServiceIteratorsSuite) TestFilterContainersAndFillCollections() {
				exColl, ok := coll.(*Collection)
				require.True(t, ok, "collection is an *exchange.Collection")

				assert.ElementsMatch(t, expect.added, exColl.added, "added items")
				assert.ElementsMatch(t, expect.removed, exColl.removed, "removed items")
				ids := [][]string{
					make([]string, 0, len(exColl.added)),
					make([]string, 0, len(exColl.removed)),
				}

				for i, cIDs := range []map[string]struct{}{exColl.added, exColl.removed} {
					for id := range cIDs {
						ids[i] = append(ids[i], id)
					}
				}

				assert.ElementsMatch(t, expect.added, ids[0], "added items")
				assert.ElementsMatch(t, expect.removed, ids[1], "removed items")
			}
		})
	}
}

func (suite *ServiceIteratorsSuite) TestFilterContainersAndFillCollections_repeatedItems() {
	newDelta := api.DeltaUpdate{URL: "delta_url"}

	table := []struct {
		name          string
		getter        mockGetter
		expectAdded   map[string]struct{}
		expectRemoved map[string]struct{}
	}{
		{
			name: "repeated adds",
			getter: map[string]mockGetterResults{
				"1": {
					added:    []string{"a1", "a2", "a3", "a1"},
					newDelta: newDelta,
				},
			},
			expectAdded: map[string]struct{}{
				"a1": {},
				"a2": {},
				"a3": {},
			},
			expectRemoved: map[string]struct{}{},
		},
		{
			name: "repeated removes",
			getter: map[string]mockGetterResults{
				"1": {
					removed:  []string{"r1", "r2", "r3", "r1"},
					newDelta: newDelta,
				},
			},
			expectAdded: map[string]struct{}{},
			expectRemoved: map[string]struct{}{
				"r1": {},
				"r2": {},
				"r3": {},
			},
		},
		{
			name: "remove for same item wins",
			getter: map[string]mockGetterResults{
				"1": {
					added:    []string{"i1", "a2", "a3"},
					removed:  []string{"i1", "r2", "r3"},
					newDelta: newDelta,
				},
			},
			expectAdded: map[string]struct{}{
				"a2": {},
				"a3": {},
			},
			expectRemoved: map[string]struct{}{
				"i1": {},
				"r2": {},
				"r3": {},
			},
		},
	}
	for _, test := range table {
		suite.T().Run(test.name, func(t *testing.T) {
			ctx, flush := tester.NewContext()
			defer flush()

			var (
				userID        = "user_id"
				qp            = graph.QueryParams{
					Category:      path.EmailCategory, // doesn't matter which one we use.
					ResourceOwner: userID,
					Credentials:   suite.creds,
				}
				statusUpdater = func(*support.ConnectorOperationStatus) {}
				allScope      = selectors.NewExchangeBackup(nil).MailFolders(selectors.Any())[0]
				dps           = DeltaPaths{} // incrementals are tested separately
				container1    = mockContainer{
					id:          strPtr("1"),
					displayName: strPtr("display_name_1"),
					p:           path.Builder{}.Append("display_name_1"),
				}
				resolver = newMockResolver(container1)
			)

			collections := map[string]data.Collection{}

			err := filterContainersAndFillCollections(
				ctx,
				qp,
				test.getter,
				collections,
				statusUpdater,
				resolver,
				allScope,
				dps,
				control.Options{FailFast: true},
			)
			require.NoError(t, err)

			// collection assertions

			deleteds, news, metadatas, doNotMerges := 0, 0, 0, 0
			for _, c := range collections {
				if c.FullPath().Service() == path.ExchangeMetadataService {
					metadatas++
					continue
				}

				if c.State() == data.DeletedState {
					deleteds++
				}

				if c.State() == data.NewState {
					news++
				}

				if c.DoNotMergeItems() {
					doNotMerges++
				}
			}

			assert.Zero(t, deleteds, "deleted collections")
			assert.Equal(t, 1, news, "new collections")
			assert.Equal(t, 1, metadatas, "metadata collections")
			assert.Zero(t, doNotMerges, "doNotMerge collections")

			// items in collections assertions
			for k := range test.getter {
				coll := collections[k]
				if !assert.NotNilf(t, coll, "missing collection for path %s", k) {
					continue
				}

				exColl, ok := coll.(*Collection)
				require.True(t, ok, "collection is an *exchange.Collection")

				assert.Equal(t, test.expectAdded, exColl.added, "added items")
				assert.Equal(t, test.expectRemoved, exColl.removed, "removed items")
			}
		})
	}
@@ -84,7 +84,10 @@ func RestoreExchangeContact(
		return nil, errors.New("msgraph contact post fail: REST response not received")
	}

	return ContactInfo(contact, int64(len(bits))), nil
	info := api.ContactInfo(contact)
	info.Size = int64(len(bits))

	return info, nil
}

// RestoreExchangeEvent restores an event to the @bits byte
@@ -153,7 +156,10 @@ func RestoreExchangeEvent(
		}
	}

	return EventInfo(event, int64(len(bits))), errs
	info := api.EventInfo(event)
	info.Size = int64(len(bits))

	return info, errs
}

// RestoreMailMessage utility function to place an exchange.Mail
@@ -215,7 +221,10 @@ func RestoreMailMessage(
		}
	}

	return MessageInfo(clone, int64(len(bits))), nil
	info := api.MailInfo(clone)
	info.Size = int64(len(bits))

	return info, nil
}

// attachmentBytes is a helper to retrieve the attachment content from a models.Attachmentable
@@ -365,7 +374,7 @@ func restoreCollection(
		user = directory.ResourceOwner()
	)

	colProgress, closer := observe.CollectionProgress(user, category.String(), directory.Folder())
	colProgress, closer := observe.CollectionProgress(ctx, user, category.String(), directory.Folder())
	defer closer()
	defer close(colProgress)

@@ -1,7 +1,9 @@
package graph

import (
	"context"
	"net/url"
	"os"

	"github.com/microsoftgraph/msgraph-sdk-go/models/odataerrors"
	"github.com/pkg/errors"
@@ -20,6 +22,8 @@ const (
	errCodeResyncRequired              = "ResyncRequired"
	errCodeSyncFolderNotFound          = "ErrorSyncFolderNotFound"
	errCodeSyncStateNotFound           = "SyncStateNotFound"
	errCodeResourceNotFound            = "ResourceNotFound"
	errCodeMailboxNotEnabledForRESTAPI = "MailboxNotEnabledForRESTAPI"
)

// The folder or item was deleted between the time we identified
@@ -69,6 +73,10 @@ func asInvalidDelta(err error) bool {
	return errors.As(err, &e)
}

func IsErrExchangeMailFolderNotFound(err error) bool {
	return hasErrorCode(err, errCodeResourceNotFound, errCodeMailboxNotEnabledForRESTAPI)
}

// Timeout errors are identified for tracking the need to retry calls.
// Other delay errors, like throttling, are already handled by the
// graph client's built-in retries.
@@ -120,6 +128,10 @@ func hasErrorCode(err error, codes ...string) bool {
// timeouts as other errors are handled within a middleware in the
// client.
func isTimeoutErr(err error) bool {
	if errors.Is(err, context.DeadlineExceeded) || os.IsTimeout(err) {
		return true
	}

	switch err := err.(type) {
	case *url.Error:
		return err.Timeout()

@@ -1,7 +1,7 @@
package graph

import (
	nethttp "net/http"
	"net/http"
	"net/http/httputil"
	"os"
	"strings"
@@ -47,7 +47,7 @@ func CreateAdapter(tenant, client, secret string) (*msgraphsdk.GraphRequestAdapt
}

// CreateHTTPClient creates the httpClient with middlewares and timeout configured
func CreateHTTPClient() *nethttp.Client {
func CreateHTTPClient() *http.Client {
	clientOptions := msgraphsdk.GetDefaultClientOptions()
	middlewares := msgraphgocore.GetDefaultMiddlewaresWithOptions(&clientOptions)
	middlewares = append(middlewares, &LoggingMiddleware{})
@@ -67,8 +67,8 @@ type LoggingMiddleware struct{}
func (handler *LoggingMiddleware) Intercept(
	pipeline khttp.Pipeline,
	middlewareIndex int,
	req *nethttp.Request,
) (*nethttp.Response, error) {
	req *http.Request,
) (*http.Response, error) {
	var (
		ctx       = req.Context()
		resp, err = pipeline.Next(req, middlewareIndex)
@@ -82,6 +82,11 @@ func (handler *LoggingMiddleware) Intercept(
		return resp, err
	}

	// special case for supportability: log all throttling cases.
	if resp.StatusCode == http.StatusTooManyRequests {
		logger.Ctx(ctx).Infow("graph api throttling", "method", req.Method, "url", req.URL)
	}

	if logger.DebugAPI || os.Getenv(logGraphRequestsEnvKey) != "" {
		respDump, _ := httputil.DumpResponse(resp, true)

@@ -9,6 +9,7 @@ import (
 	"time"

 	"github.com/microsoftgraph/msgraph-sdk-go/models"
+	"github.com/spatialcurrent/go-lazy/pkg/lazy"

 	"github.com/alcionai/corso/src/internal/connector/graph"
 	"github.com/alcionai/corso/src/internal/connector/support"
@@ -37,8 +38,7 @@ var (
 	_ data.Collection    = &Collection{}
 	_ data.Stream        = &Item{}
 	_ data.StreamInfo    = &Item{}
-	// TODO(ashmrtn): Uncomment when #1702 is resolved.
-	//_ data.StreamModTime = &Item{}
+	_ data.StreamModTime = &Item{}
 )

 // Collection represents a set of OneDrive objects retrieved from M365
@@ -49,7 +49,7 @@ type Collection struct {
 	// represents
 	folderPath path.Path
 	// M365 IDs of file items within this collection
-	driveItems []models.DriveItemable
+	driveItems map[string]models.DriveItemable
 	// M365 ID of the drive this collection was created from
 	driveID string
 	source  driveSource
@@ -79,6 +79,7 @@ func NewCollection(
 ) *Collection {
 	c := &Collection{
 		folderPath: folderPath,
+		driveItems: map[string]models.DriveItemable{},
 		driveID:    driveID,
 		source:     source,
 		service:    service,
@@ -101,7 +102,7 @@ func NewCollection(
 // Adds an itemID to the collection
 // This will make it eligible to be populated
 func (oc *Collection) Add(item models.DriveItemable) {
-	oc.driveItems = append(oc.driveItems, item)
+	oc.driveItems[*item.GetId()] = item
 }

 // Items() returns the channel containing M365 Exchange objects
@@ -157,10 +158,9 @@ func (od *Item) Info() details.ItemInfo {
 	return od.info
 }

-// TODO(ashmrtn): Uncomment when #1702 is resolved.
-//func (od *Item) ModTime() time.Time {
-//	return od.info.Modified
-//}
+func (od *Item) ModTime() time.Time {
+	return od.info.Modified()
+}

 // populateItems iterates through items added to the collection
 // and uses the collection `itemReader` to read the item
@@ -182,10 +182,10 @@ func (oc *Collection) populateItems(ctx context.Context) {
 	}

 	folderProgress, colCloser := observe.ProgressWithCount(
+		ctx,
 		observe.ItemQueueMsg,
 		"/"+parentPathString,
-		int64(len(oc.driveItems)),
-	)
+		int64(len(oc.driveItems)))
 	defer colCloser()
 	defer close(folderProgress)

@@ -252,8 +252,11 @@ func (oc *Collection) populateItems(ctx context.Context) {
 				itemSize = itemInfo.OneDrive.Size
 			}

-			progReader, closer := observe.ItemProgress(itemData, observe.ItemBackupMsg, itemName, itemSize)
-			go closer()
+			itemReader := lazy.NewLazyReadCloser(func() (io.ReadCloser, error) {
+				progReader, closer := observe.ItemProgress(ctx, itemData, observe.ItemBackupMsg, itemName, itemSize)
+				go closer()
+				return progReader, nil
+			})

 			// Item read successfully, add to collection
 			atomic.AddInt64(&itemsRead, 1)
@@ -262,7 +265,7 @@ func (oc *Collection) populateItems(ctx context.Context) {

 			oc.data <- &Item{
 				id:   itemName,
-				data: progReader,
+				data: itemReader,
 				info: itemInfo,
 			}
 			folderProgress <- struct{}{}
@@ -287,6 +290,6 @@ func (oc *Collection) reportAsCompleted(ctx context.Context, itemsRead int, byte
 		errs,
 		oc.folderPath.Folder(), // Additional details
 	)
-	logger.Ctx(ctx).Debug(status.String())
+	logger.Ctx(ctx).Debugw("done streaming items", "status", status.String())
 	oc.statusUpdater(status)
 }

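Switching `driveItems` from a slice to a map keyed by M365 item ID makes `Add` idempotent: an item delivered more than once (for example across overlapping delta pages) is stored a single time instead of being duplicated. A minimal sketch of that behavior (the `collection`/`driveItem` types here are illustrative, not the Corso ones):

```go
package main

import "fmt"

type driveItem struct{ id, name string }

// collection stores items keyed by ID, so re-adding an item with the
// same ID overwrites the existing entry rather than appending a copy.
type collection struct{ items map[string]driveItem }

func newCollection() *collection {
	return &collection{items: map[string]driveItem{}}
}

func (c *collection) Add(it driveItem) { c.items[it.id] = it }

func main() {
	c := newCollection()

	// Simulate the same item arriving on three delta pages.
	for i := 0; i < 3; i++ {
		c.Add(driveItem{id: "item1", name: "report.docx"})
	}

	fmt.Println(len(c.items))
}
```

The tradeoff is that map iteration order is unspecified, which is acceptable here since items are streamed independently.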
@@ -7,6 +7,7 @@ import (
 	"io"
 	"sync"
 	"testing"
+	"time"

 	absser "github.com/microsoft/kiota-abstractions-go/serialization"
 	msgraphsdk "github.com/microsoftgraph/msgraph-sdk-go"
@@ -63,19 +64,22 @@ func (suite *CollectionUnitTestSuite) TestCollection() {
 		testItemID   = "fakeItemID"
 		testItemName = "itemName"
 		testItemData = []byte("testdata")
+		now          = time.Now()
 	)

 	table := []struct {
 		name         string
+		numInstances int
 		source       driveSource
 		itemReader   itemReaderFunc
 		infoFrom     func(*testing.T, details.ItemInfo) (string, string)
 	}{
 		{
-			name:   "oneDrive",
+			name:         "oneDrive, no duplicates",
+			numInstances: 1,
 			source:       OneDriveSource,
 			itemReader: func(context.Context, models.DriveItemable) (details.ItemInfo, io.ReadCloser, error) {
-				return details.ItemInfo{OneDrive: &details.OneDriveInfo{ItemName: testItemName}},
+				return details.ItemInfo{OneDrive: &details.OneDriveInfo{ItemName: testItemName, Modified: now}},
					io.NopCloser(bytes.NewReader(testItemData)),
					nil
			},
@@ -85,10 +89,39 @@ func (suite *CollectionUnitTestSuite) TestCollection() {
			},
		},
		{
-			name:   "sharePoint",
+			name:         "oneDrive, duplicates",
+			numInstances: 3,
+			source:       OneDriveSource,
+			itemReader: func(context.Context, models.DriveItemable) (details.ItemInfo, io.ReadCloser, error) {
+				return details.ItemInfo{OneDrive: &details.OneDriveInfo{ItemName: testItemName, Modified: now}},
+					io.NopCloser(bytes.NewReader(testItemData)),
+					nil
+			},
+			infoFrom: func(t *testing.T, dii details.ItemInfo) (string, string) {
+				require.NotNil(t, dii.OneDrive)
+				return dii.OneDrive.ItemName, dii.OneDrive.ParentPath
+			},
+		},
+		{
+			name:         "sharePoint, no duplicates",
+			numInstances: 1,
			source:       SharePointSource,
			itemReader: func(context.Context, models.DriveItemable) (details.ItemInfo, io.ReadCloser, error) {
-				return details.ItemInfo{SharePoint: &details.SharePointInfo{ItemName: testItemName}},
+				return details.ItemInfo{SharePoint: &details.SharePointInfo{ItemName: testItemName, Modified: now}},
					io.NopCloser(bytes.NewReader(testItemData)),
					nil
			},
			infoFrom: func(t *testing.T, dii details.ItemInfo) (string, string) {
				require.NotNil(t, dii.SharePoint)
				return dii.SharePoint.ItemName, dii.SharePoint.ParentPath
			},
		},
+		{
+			name:         "sharePoint, duplicates",
+			numInstances: 3,
+			source:       SharePointSource,
+			itemReader: func(context.Context, models.DriveItemable) (details.ItemInfo, io.ReadCloser, error) {
+				return details.ItemInfo{SharePoint: &details.SharePointInfo{ItemName: testItemName, Modified: now}},
+					io.NopCloser(bytes.NewReader(testItemData)),
+					nil
+			},
@@ -124,7 +157,11 @@ func (suite *CollectionUnitTestSuite) TestCollection() {
			// Set a item reader, add an item and validate we get the item back
			mockItem := models.NewDriveItem()
			mockItem.SetId(&testItemID)
+
+			for i := 0; i < test.numInstances; i++ {
				coll.Add(mockItem)
+			}
+
			coll.itemReader = test.itemReader

			// Read items from the collection
@@ -146,6 +183,11 @@ func (suite *CollectionUnitTestSuite) TestCollection() {
			readItemInfo := readItem.(data.StreamInfo)

			assert.Equal(t, testItemName, readItem.UUID())

+			require.Implements(t, (*data.StreamModTime)(nil), readItem)
+			mt := readItem.(data.StreamModTime)
+			assert.Equal(t, now, mt.ModTime())
+
			readData, err := io.ReadAll(readItem.ToReader())
			require.NoError(t, err)

@@ -25,6 +25,17 @@ const (
 	SharePointSource
 )

+func (ds driveSource) toPathServiceCat() (path.ServiceType, path.CategoryType) {
+	switch ds {
+	case OneDriveSource:
+		return path.OneDriveService, path.FilesCategory
+	case SharePointSource:
+		return path.SharePointService, path.LibrariesCategory
+	default:
+		return path.UnknownService, path.UnknownCategory
+	}
+}
+
 type folderMatcher interface {
 	IsAny() bool
 	Matches(string) bool
@@ -81,27 +92,80 @@ func (c *Collections) Get(ctx context.Context) ([]data.Collection, error) {
 		return nil, err
 	}

+	var (
+		// Drive ID -> delta URL for drive
+		deltaURLs = map[string]string{}
+		// Drive ID -> folder ID -> folder path
+		folderPaths = map[string]map[string]string{}
+	)
+
 	// Update the collection map with items from each drive
 	for _, d := range drives {
-		err = collectItems(ctx, c.service, *d.GetId(), c.UpdateCollections)
+		driveID := *d.GetId()
+
+		delta, paths, err := collectItems(ctx, c.service, driveID, c.UpdateCollections)
 		if err != nil {
 			return nil, err
 		}
+
+		if len(delta) > 0 {
+			deltaURLs[driveID] = delta
+		}
+
+		if len(paths) > 0 {
+			folderPaths[driveID] = map[string]string{}
+
+			for id, p := range paths {
+				folderPaths[driveID][id] = p
+			}
+		}
 	}

-	observe.Message(fmt.Sprintf("Discovered %d items to backup", c.NumItems))
+	observe.Message(ctx, fmt.Sprintf("Discovered %d items to backup", c.NumItems))

-	collections := make([]data.Collection, 0, len(c.CollectionMap))
+	// Add an extra for the metadata collection.
+	collections := make([]data.Collection, 0, len(c.CollectionMap)+1)
 	for _, coll := range c.CollectionMap {
 		collections = append(collections, coll)
 	}

+	service, category := c.source.toPathServiceCat()
+	metadata, err := graph.MakeMetadataCollection(
+		c.tenant,
+		c.resourceOwner,
+		service,
+		category,
+		[]graph.MetadataCollectionEntry{
+			graph.NewMetadataEntry(graph.PreviousPathFileName, folderPaths),
+			graph.NewMetadataEntry(graph.DeltaURLsFileName, deltaURLs),
+		},
+		c.statusUpdater,
+	)
+	if err != nil {
+		// Technically it's safe to continue here because the logic for starting an
+		// incremental backup should eventually find that the metadata files are
+		// empty/missing and default to a full backup.
+		logger.Ctx(ctx).Warnw(
+			"making metadata collection for future incremental backups",
+			"error",
+			err,
+		)
+	} else {
+		collections = append(collections, metadata)
+	}
+
 	return collections, nil
 }

 // UpdateCollections initializes and adds the provided drive items to Collections
 // A new collection is created for every drive folder (or package)
-func (c *Collections) UpdateCollections(ctx context.Context, driveID string, items []models.DriveItemable) error {
+func (c *Collections) UpdateCollections(
+	ctx context.Context,
+	driveID string,
+	items []models.DriveItemable,
+	paths map[string]string,
+) error {
 	for _, item := range items {
 		if item.GetRoot() != nil {
 			// Skip the root item
@@ -131,9 +195,19 @@ func (c *Collections) UpdateCollections(ctx context.Context, driveID string, ite

 		switch {
 		case item.GetFolder() != nil, item.GetPackage() != nil:
-			// Leave this here so we don't fall into the default case.
-			// TODO: This is where we might create a "special file" to represent these in the backup repository
-			// e.g. a ".folderMetadataFile"
+			// Eventually, deletions of folders will be handled here so we may as well
+			// start off by saving the path.Path of the item instead of just the
+			// OneDrive parentRef or such.
+			folderPath, err := collectionPath.Append(*item.GetName(), false)
+			if err != nil {
+				logger.Ctx(ctx).Errorw("failed building collection path", "error", err)
+				return err
+			}
+
+			// TODO(ashmrtn): Handle deletions by removing this entry from the map.
+			// TODO(ashmrtn): Handle moves by setting the collection state if the
+			// collection doesn't already exist/have that state.
+			paths[*item.GetId()] = folderPath.String()

 		case item.GetFile() != nil:
 			col, found := c.CollectionMap[collectionPath.String()]

@@ -102,19 +102,21 @@ func (suite *OneDriveCollectionsSuite) TestUpdateCollections() {
 		expectedItemCount      int
 		expectedContainerCount int
 		expectedFileCount      int
+		expectedMetadataPaths  map[string]string
 	}{
 		{
 			testCase: "Invalid item",
 			items: []models.DriveItemable{
-				driveItem("item", testBaseDrivePath, false, false, false),
+				driveItem("item", "item", testBaseDrivePath, false, false, false),
 			},
 			scope:  anyFolder,
 			expect: assert.Error,
+			expectedMetadataPaths: map[string]string{},
 		},
 		{
 			testCase: "Single File",
 			items: []models.DriveItemable{
-				driveItem("file", testBaseDrivePath, true, false, false),
+				driveItem("file", "file", testBaseDrivePath, true, false, false),
 			},
 			scope:  anyFolder,
 			expect: assert.NoError,
@@ -127,33 +129,51 @@ func (suite *OneDriveCollectionsSuite) TestUpdateCollections() {
 			expectedItemCount:      2,
 			expectedFileCount:      1,
 			expectedContainerCount: 1,
+			// Root folder is skipped since it's always present.
+			expectedMetadataPaths: map[string]string{},
 		},
 		{
 			testCase: "Single Folder",
 			items: []models.DriveItemable{
-				driveItem("folder", testBaseDrivePath, false, true, false),
+				driveItem("folder", "folder", testBaseDrivePath, false, true, false),
 			},
 			scope:                   anyFolder,
 			expect:                  assert.NoError,
 			expectedCollectionPaths: []string{},
+			expectedMetadataPaths: map[string]string{
+				"folder": expectedPathAsSlice(
+					suite.T(),
+					tenant,
+					user,
+					testBaseDrivePath+"/folder",
+				)[0],
+			},
 		},
 		{
 			testCase: "Single Package",
 			items: []models.DriveItemable{
-				driveItem("package", testBaseDrivePath, false, false, true),
+				driveItem("package", "package", testBaseDrivePath, false, false, true),
 			},
 			scope:                   anyFolder,
 			expect:                  assert.NoError,
 			expectedCollectionPaths: []string{},
+			expectedMetadataPaths: map[string]string{
+				"package": expectedPathAsSlice(
+					suite.T(),
+					tenant,
+					user,
+					testBaseDrivePath+"/package",
+				)[0],
+			},
 		},
 		{
 			testCase: "1 root file, 1 folder, 1 package, 2 files, 3 collections",
 			items: []models.DriveItemable{
-				driveItem("fileInRoot", testBaseDrivePath, true, false, false),
-				driveItem("folder", testBaseDrivePath, false, true, false),
-				driveItem("package", testBaseDrivePath, false, false, true),
-				driveItem("fileInFolder", testBaseDrivePath+folder, true, false, false),
-				driveItem("fileInPackage", testBaseDrivePath+pkg, true, false, false),
+				driveItem("fileInRoot", "fileInRoot", testBaseDrivePath, true, false, false),
+				driveItem("folder", "folder", testBaseDrivePath, false, true, false),
+				driveItem("package", "package", testBaseDrivePath, false, false, true),
+				driveItem("fileInFolder", "fileInFolder", testBaseDrivePath+folder, true, false, false),
+				driveItem("fileInPackage", "fileInPackage", testBaseDrivePath+pkg, true, false, false),
 			},
 			scope:  anyFolder,
 			expect: assert.NoError,
@@ -168,18 +188,32 @@ func (suite *OneDriveCollectionsSuite) TestUpdateCollections() {
 			expectedItemCount:      6,
 			expectedFileCount:      3,
 			expectedContainerCount: 3,
+			expectedMetadataPaths: map[string]string{
+				"folder": expectedPathAsSlice(
+					suite.T(),
+					tenant,
+					user,
+					testBaseDrivePath+"/folder",
+				)[0],
+				"package": expectedPathAsSlice(
+					suite.T(),
+					tenant,
+					user,
+					testBaseDrivePath+"/package",
+				)[0],
+			},
 		},
 		{
 			testCase: "contains folder selector",
 			items: []models.DriveItemable{
-				driveItem("fileInRoot", testBaseDrivePath, true, false, false),
-				driveItem("folder", testBaseDrivePath, false, true, false),
-				driveItem("subfolder", testBaseDrivePath+folder, false, true, false),
-				driveItem("folder", testBaseDrivePath+folderSub, false, true, false),
-				driveItem("package", testBaseDrivePath, false, false, true),
-				driveItem("fileInFolder", testBaseDrivePath+folder, true, false, false),
-				driveItem("fileInFolder2", testBaseDrivePath+folderSub+folder, true, false, false),
-				driveItem("fileInPackage", testBaseDrivePath+pkg, true, false, false),
+				driveItem("fileInRoot", "fileInRoot", testBaseDrivePath, true, false, false),
+				driveItem("folder", "folder", testBaseDrivePath, false, true, false),
+				driveItem("subfolder", "subfolder", testBaseDrivePath+folder, false, true, false),
+				driveItem("folder2", "folder", testBaseDrivePath+folderSub, false, true, false),
+				driveItem("package", "package", testBaseDrivePath, false, false, true),
+				driveItem("fileInFolder", "fileInFolder", testBaseDrivePath+folder, true, false, false),
+				driveItem("fileInFolder2", "fileInFolder2", testBaseDrivePath+folderSub+folder, true, false, false),
+				driveItem("fileInFolderPackage", "fileInPackage", testBaseDrivePath+pkg, true, false, false),
 			},
 			scope:  (&selectors.OneDriveBackup{}).Folders([]string{"folder"})[0],
 			expect: assert.NoError,
@@ -200,18 +234,34 @@ func (suite *OneDriveCollectionsSuite) TestUpdateCollections() {
 			expectedItemCount:      4,
 			expectedFileCount:      2,
 			expectedContainerCount: 2,
+			// just "folder" isn't added here because the include check is done on the
+			// parent path since we only check later if something is a folder or not.
+			expectedMetadataPaths: map[string]string{
+				"subfolder": expectedPathAsSlice(
+					suite.T(),
+					tenant,
+					user,
+					testBaseDrivePath+"/folder/subfolder",
+				)[0],
+				"folder2": expectedPathAsSlice(
+					suite.T(),
+					tenant,
+					user,
+					testBaseDrivePath+"/folder/subfolder/folder",
+				)[0],
+			},
 		},
 		{
 			testCase: "prefix subfolder selector",
 			items: []models.DriveItemable{
-				driveItem("fileInRoot", testBaseDrivePath, true, false, false),
-				driveItem("folder", testBaseDrivePath, false, true, false),
-				driveItem("subfolder", testBaseDrivePath+folder, false, true, false),
-				driveItem("folder", testBaseDrivePath+folderSub, false, true, false),
-				driveItem("package", testBaseDrivePath, false, false, true),
-				driveItem("fileInFolder", testBaseDrivePath+folder, true, false, false),
-				driveItem("fileInFolder2", testBaseDrivePath+folderSub+folder, true, false, false),
-				driveItem("fileInPackage", testBaseDrivePath+pkg, true, false, false),
+				driveItem("fileInRoot", "fileInRoot", testBaseDrivePath, true, false, false),
+				driveItem("folder", "folder", testBaseDrivePath, false, true, false),
+				driveItem("subfolder", "subfolder", testBaseDrivePath+folder, false, true, false),
+				driveItem("folder", "folder", testBaseDrivePath+folderSub, false, true, false),
+				driveItem("package", "package", testBaseDrivePath, false, false, true),
+				driveItem("fileInFolder", "fileInFolder", testBaseDrivePath+folder, true, false, false),
+				driveItem("fileInFolder2", "fileInFolder2", testBaseDrivePath+folderSub+folder, true, false, false),
+				driveItem("fileInPackage", "fileInPackage", testBaseDrivePath+pkg, true, false, false),
 			},
 			scope: (&selectors.OneDriveBackup{}).
 				Folders([]string{"/folder/subfolder"}, selectors.PrefixMatch())[0],
@@ -225,17 +275,25 @@ func (suite *OneDriveCollectionsSuite) TestUpdateCollections() {
 			expectedItemCount:      2,
 			expectedFileCount:      1,
 			expectedContainerCount: 1,
+			expectedMetadataPaths: map[string]string{
+				"folder": expectedPathAsSlice(
+					suite.T(),
+					tenant,
+					user,
+					testBaseDrivePath+"/folder/subfolder/folder",
+				)[0],
+			},
 		},
 		{
 			testCase: "match subfolder selector",
 			items: []models.DriveItemable{
-				driveItem("fileInRoot", testBaseDrivePath, true, false, false),
-				driveItem("folder", testBaseDrivePath, false, true, false),
-				driveItem("subfolder", testBaseDrivePath+folder, false, true, false),
-				driveItem("package", testBaseDrivePath, false, false, true),
-				driveItem("fileInFolder", testBaseDrivePath+folder, true, false, false),
-				driveItem("fileInSubfolder", testBaseDrivePath+folderSub, true, false, false),
-				driveItem("fileInPackage", testBaseDrivePath+pkg, true, false, false),
+				driveItem("fileInRoot", "fileInRoot", testBaseDrivePath, true, false, false),
+				driveItem("folder", "folder", testBaseDrivePath, false, true, false),
+				driveItem("subfolder", "subfolder", testBaseDrivePath+folder, false, true, false),
+				driveItem("package", "package", testBaseDrivePath, false, false, true),
+				driveItem("fileInFolder", "fileInFolder", testBaseDrivePath+folder, true, false, false),
+				driveItem("fileInSubfolder", "fileInSubfolder", testBaseDrivePath+folderSub, true, false, false),
+				driveItem("fileInPackage", "fileInPackage", testBaseDrivePath+pkg, true, false, false),
 			},
 			scope:  (&selectors.OneDriveBackup{}).Folders([]string{"folder/subfolder"})[0],
 			expect: assert.NoError,
@@ -248,6 +306,8 @@ func (suite *OneDriveCollectionsSuite) TestUpdateCollections() {
 			expectedItemCount:      2,
 			expectedFileCount:      1,
 			expectedContainerCount: 1,
+			// No child folders for subfolder so nothing here.
+			expectedMetadataPaths: map[string]string{},
 		},
 	}

@@ -256,6 +316,7 @@ func (suite *OneDriveCollectionsSuite) TestUpdateCollections() {
 			ctx, flush := tester.NewContext()
 			defer flush()

+			paths := map[string]string{}
 			c := NewCollections(
 				tenant,
 				user,
@@ -265,7 +326,7 @@ func (suite *OneDriveCollectionsSuite) TestUpdateCollections() {
 				nil,
 				control.Options{})

-			err := c.UpdateCollections(ctx, "driveID", tt.items)
+			err := c.UpdateCollections(ctx, "driveID", tt.items, paths)
 			tt.expect(t, err)
 			assert.Equal(t, len(tt.expectedCollectionPaths), len(c.CollectionMap), "collection paths")
 			assert.Equal(t, tt.expectedItemCount, c.NumItems, "item count")
@@ -274,14 +335,16 @@ func (suite *OneDriveCollectionsSuite) TestUpdateCollections() {
 			for _, collPath := range tt.expectedCollectionPaths {
 				assert.Contains(t, c.CollectionMap, collPath)
 			}
+
+			assert.Equal(t, tt.expectedMetadataPaths, paths)
 		})
 	}
 }

-func driveItem(name string, path string, isFile, isFolder, isPackage bool) models.DriveItemable {
+func driveItem(id string, name string, path string, isFile, isFolder, isPackage bool) models.DriveItemable {
 	item := models.NewDriveItem()
 	item.SetName(&name)
-	item.SetId(&name)
+	item.SetId(&id)

 	parentReference := models.NewItemReference()
 	parentReference.SetPath(&path)

@@ -54,6 +54,18 @@ var (
 		"afcafa6a-d966-4462-918c-ec0b4e0fe642",
 		// Microsoft 365 E5 Developer
 		"c42b9cae-ea4f-4ab7-9717-81576235ccac",
+		// Microsoft 365 E5
+		"06ebc4ee-1bb5-47dd-8120-11324bc54e06",
+		// Office 365 E4
+		"1392051d-0cb9-4b7a-88d5-621fee5e8711",
+		// Microsoft 365 E3
+		"05e9a617-0261-4cee-bb44-138d3ef5d965",
+		// Microsoft 365 Business Premium
+		"cbdc14ab-d96c-4c30-b9f4-6ada7cdc1d46",
+		// Microsoft 365 Business Standard
+		"f245ecc8-75af-4f8e-b61f-27d8114de5f3",
+		// Microsoft 365 Business Basic
+		"3b555118-da6a-4418-894f-7df1e2096870",
 	}
 )

@@ -149,7 +161,12 @@ func userDrives(ctx context.Context, service graph.Servicer, user string) ([]mod
 }

 // itemCollector functions collect the items found in a drive
-type itemCollector func(ctx context.Context, driveID string, driveItems []models.DriveItemable) error
+type itemCollector func(
+	ctx context.Context,
+	driveID string,
+	driveItems []models.DriveItemable,
+	paths map[string]string,
+) error

 // collectItems will enumerate all items in the specified drive and hand them to the
 // provided `collector` method
@@ -158,7 +175,14 @@ func collectItems(
 	service graph.Servicer,
 	driveID string,
 	collector itemCollector,
-) error {
+) (string, map[string]string, error) {
+	var (
+		newDeltaURL = ""
+		// TODO(ashmrtn): Eventually this should probably be a parameter so we can
+		// take in previous paths.
+		paths = map[string]string{}
+	)
+
 	// TODO: Specify a timestamp in the delta query
 	// https://docs.microsoft.com/en-us/graph/api/driveitem-delta?
 	// view=graph-rest-1.0&tabs=http#example-4-retrieving-delta-results-using-a-timestamp
@@ -188,16 +212,20 @@ func collectItems(
 	for {
 		r, err := builder.Get(ctx, requestConfig)
 		if err != nil {
-			return errors.Wrapf(
+			return "", nil, errors.Wrapf(
 				err,
 				"failed to query drive items. details: %s",
 				support.ConnectorStackErrorTrace(err),
 			)
 		}

-		err = collector(ctx, driveID, r.GetValue())
+		err = collector(ctx, driveID, r.GetValue(), paths)
 		if err != nil {
-			return err
+			return "", nil, err
 		}

+		if r.GetOdataDeltaLink() != nil && len(*r.GetOdataDeltaLink()) > 0 {
+			newDeltaURL = *r.GetOdataDeltaLink()
+		}
+
 		// Check if there are more items
@@ -210,7 +238,7 @@ func collectItems(
 		builder = msdrives.NewItemRootDeltaRequestBuilder(*nextLink, service.Adapter())
 	}

-	return nil
+	return newDeltaURL, paths, nil
 }

 // getFolder will lookup the specified folder name under `parentFolderID`
@@ -317,11 +345,16 @@ func GetAllFolders(
 	folders := map[string]*Displayable{}

 	for _, d := range drives {
-		err = collectItems(
+		_, _, err = collectItems(
 			ctx,
 			gs,
 			*d.GetId(),
-			func(innerCtx context.Context, driveID string, items []models.DriveItemable) error {
+			func(
+				innerCtx context.Context,
+				driveID string,
+				items []models.DriveItemable,
+				paths map[string]string,
+			) error {
 				for _, item := range items {
 					// Skip the root item.
 					if item.GetRoot() != nil {

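`collectItems` above pages through delta results, following the next-page link until the final page yields a delta link, which it now returns for the next incremental backup. That loop can be sketched against an in-memory pager (every type here is a mock for illustration, not the msgraph SDK):

```go
package main

import "fmt"

// page mimics one response from a delta query.
type page struct {
	items     []string
	nextLink  string // set when more pages follow
	deltaLink string // set on the final page
}

// collectItems walks pages from start, hands each page's items to the
// collector, and returns the delta token found on the last page.
func collectItems(pages map[string]page, start string, collect func([]string)) string {
	var delta string

	link := start
	for {
		p := pages[link]
		collect(p.items)

		if p.deltaLink != "" {
			delta = p.deltaLink
		}

		if p.nextLink == "" {
			// No more pages; delta now holds the token for the next run.
			return delta
		}

		link = p.nextLink
	}
}

func main() {
	pages := map[string]page{
		"p1": {items: []string{"a", "b"}, nextLink: "p2"},
		"p2": {items: []string{"c"}, deltaLink: "delta-token-123"},
	}

	var all []string
	delta := collectItems(pages, "p1", func(xs []string) { all = append(all, xs...) })

	fmt.Println(len(all), delta)
}
```

Persisting the returned token (as the new metadata collection does) lets the next backup request only changes since this run.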
@@ -4,6 +4,7 @@ import (
 	"context"
 	"fmt"
 	"io"
+	"strings"

 	msdrives "github.com/microsoftgraph/msgraph-sdk-go/drives"
 	"github.com/microsoftgraph/msgraph-sdk-go/models"
@@ -128,10 +129,11 @@ func oneDriveItemInfo(di models.DriveItemable, itemSize int64) *details.OneDrive
 // separately for restore processes because the local itemable
 // doesn't have its size value updated as a side effect of creation,
 // and kiota drops any SetSize update.
+// TODO: Update drive name during Issue #2071
 func sharePointItemInfo(di models.DriveItemable, itemSize int64) *details.SharePointInfo {
 	var (
-		id  string
-		url string
+		id, parent, url string
+		reference       = di.GetParentReference()
 	)

 	// TODO: we rely on this info for details/restore lookups,
@@ -148,11 +150,26 @@ func sharePointItemInfo(di models.DriveItemable, itemSize int64) *details.ShareP
 		}
 	}

+	if reference != nil {
+		parent = *reference.GetDriveId()
+
+		if reference.GetName() != nil {
+			// EndPoint is not always populated from external apps
+			temp := *reference.GetName()
+			temp = strings.TrimSpace(temp)
+
+			if temp != "" {
+				parent = temp
+			}
+		}
+	}
+
 	return &details.SharePointInfo{
-		ItemType: details.OneDriveItem,
-		ItemName: *di.GetName(),
-		Created:  *di.GetCreatedDateTime(),
-		Modified: *di.GetLastModifiedDateTime(),
-		Size:     itemSize,
-		Owner:    id,
-		WebURL:   url,
+		ItemType:  details.OneDriveItem,
+		ItemName:  *di.GetName(),
+		Created:   *di.GetCreatedDateTime(),
+		Modified:  *di.GetLastModifiedDateTime(),
+		DriveName: parent,
+		Size:      itemSize,
+		Owner:     id,
+		WebURL:    url,

@@ -100,7 +100,12 @@ func (suite *ItemIntegrationSuite) TestItemReader_oneDrive() {

 	var driveItem models.DriveItemable
 	// This item collector tries to find "a" drive item that is a file to test the reader function
-	itemCollector := func(ctx context.Context, driveID string, items []models.DriveItemable) error {
+	itemCollector := func(
+		ctx context.Context,
+		driveID string,
+		items []models.DriveItemable,
+		paths map[string]string,
+	) error {
 		for _, item := range items {
 			if item.GetFile() != nil {
 				driveItem = item
@@ -110,7 +115,7 @@ func (suite *ItemIntegrationSuite) TestItemReader_oneDrive() {

 		return nil
 	}
-	err := collectItems(ctx, suite, suite.userDriveID, itemCollector)
+	_, _, err := collectItems(ctx, suite, suite.userDriveID, itemCollector)
 	require.NoError(suite.T(), err)

 	// Test Requirement 2: Need a file

@@ -99,7 +99,10 @@ func RestoreCollection(
 	restoreFolderElements = append(restoreFolderElements, drivePath.Folders...)

 	trace.Log(ctx, "gc:oneDrive:restoreCollection", directory.String())
-	logger.Ctx(ctx).Debugf("Restore target for %s is %v", dc.FullPath(), restoreFolderElements)
+	logger.Ctx(ctx).Infow(
+		"restoring to destination",
+		"origin", dc.FullPath().Folder(),
+		"destination", restoreFolderElements)

 	// Create restore folders and get the folder ID of the folder the data stream will be restored in
 	restoreFolderID, err := CreateRestoreFolders(ctx, service, drivePath.DriveID, restoreFolderElements)
@@ -195,7 +198,11 @@ func CreateRestoreFolders(ctx context.Context, service graph.Servicer, driveID s
 			)
 		}

-		logger.Ctx(ctx).Debugf("Resolved %s in %s to %s", folder, parentFolderID, *folderItem.GetId())
+		logger.Ctx(ctx).Debugw("resolved restore destination",
+			"dest_name", folder,
+			"parent", parentFolderID,
+			"dest_id", *folderItem.GetId())

 		parentFolderID = *folderItem.GetId()
 	}

@@ -236,7 +243,7 @@ func restoreItem(
 	}

 	iReader := itemData.ToReader()
-	progReader, closer := observe.ItemProgress(iReader, observe.ItemRestoreMsg, itemName, ss.Size())
+	progReader, closer := observe.ItemProgress(ctx, iReader, observe.ItemRestoreMsg, itemName, ss.Size())

 	go closer()

@ -156,7 +156,11 @@ func (sc *Collection) populate(ctx context.Context) {
|
||||
)
|
||||
|
||||
// TODO: Insert correct ID for CollectionProgress
|
||||
colProgress, closer := observe.CollectionProgress("name", sc.fullPath.Category().String(), sc.fullPath.Folder())
|
||||
colProgress, closer := observe.CollectionProgress(
|
||||
ctx,
|
||||
"name",
|
||||
sc.fullPath.Category().String(),
|
||||
sc.fullPath.Folder())
|
||||
go closer()
|
||||
|
||||
defer func() {
|
||||
|
||||
@ -2,7 +2,6 @@ package sharepoint
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
|
||||
@ -43,8 +42,8 @@ func DataCollections(
|
||||
)
|
||||
|
||||
for _, scope := range b.Scopes() {
|
||||
foldersComplete, closer := observe.MessageWithCompletion(fmt.Sprintf(
|
||||
"∙ %s - %s:",
|
||||
foldersComplete, closer := observe.MessageWithCompletion(ctx, observe.Bulletf(
|
||||
"%s - %s",
|
||||
scope.Category().PathType(), site))
|
||||
defer closer()
|
||||
defer close(foldersComplete)
|
||||
|
||||
@ -104,6 +104,7 @@ func (suite *SharePointLibrariesSuite) TestUpdateCollections() {
|
||||
ctx, flush := tester.NewContext()
|
||||
defer flush()
|
||||
|
||||
paths := map[string]string{}
|
||||
c := onedrive.NewCollections(
|
||||
tenant,
|
||||
site,
|
||||
@ -112,7 +113,7 @@ func (suite *SharePointLibrariesSuite) TestUpdateCollections() {
|
||||
suite.mockService,
|
||||
nil,
|
||||
control.Options{})
|
||||
err := c.UpdateCollections(ctx, "driveID", test.items)
|
||||
err := c.UpdateCollections(ctx, "driveID", test.items, paths)
|
||||
test.expect(t, err)
|
||||
assert.Equal(t, len(test.expectedCollectionPaths), len(c.CollectionMap), "collection paths")
|
||||
assert.Equal(t, test.expectedItemCount, c.NumItems, "item count")
|
||||
|
||||
@ -89,8 +89,20 @@ func concatenateStringFromPointers(orig string, pointers []*string) string {
|
||||
return orig
|
||||
}
|
||||
|
||||
// ConnectorStackErrorTrace is a helper function that wraps the
|
||||
// stack trace for oDataError types from querying the M365 back store.
|
||||
// ConnectorStackErrorTraceWrap is a helper function that wraps the
|
||||
// stack trace for oDataErrors (if the error has one) onto the prefix.
|
||||
// If no stack trace is found, wraps the error with only the prefix.
|
||||
func ConnectorStackErrorTraceWrap(e error, prefix string) error {
|
||||
cset := ConnectorStackErrorTrace(e)
|
||||
if len(cset) > 0 {
|
||||
return errors.Wrap(e, prefix+": "+cset)
|
||||
}
|
||||
|
||||
return errors.Wrap(e, prefix)
|
||||
}
|
||||
|
||||
// ConnectorStackErrorTracew is a helper function that extracts
|
||||
// the stack trace for oDataErrors, if the error has one.
|
||||
func ConnectorStackErrorTrace(e error) string {
|
||||
eMessage := ""
|
||||
|
||||
|
||||
@ -32,5 +32,5 @@ func s3BlobStorage(ctx context.Context, s storage.Storage) (blob.Storage, error)
|
||||
DoNotVerifyTLS: cfg.DoNotVerifyTLS,
|
||||
}
|
||||
|
||||
return s3.New(ctx, &opts)
|
||||
return s3.New(ctx, &opts, false)
|
||||
}
|
||||
|
||||
@ -24,6 +24,9 @@ const (
|
||||
// (permalinks)
|
||||
// [1] https://github.com/kopia/kopia/blob/05e729a7858a6e86cb48ba29fb53cb6045efce2b/cli/command_snapshot_create.go#L169
|
||||
userTagPrefix = "tag:"
|
||||
|
||||
// Tag key applied to checkpoints (but not completed snapshots) in kopia.
|
||||
checkpointTagKey = "checkpoint"
|
||||
)
|
||||
|
||||
type Reason struct {
|
||||
@ -66,30 +69,6 @@ type snapshotManager interface {
|
||||
LoadSnapshots(ctx context.Context, ids []manifest.ID) ([]*snapshot.Manifest, error)
|
||||
}
|
||||
|
||||
type OwnersCats struct {
|
||||
ResourceOwners map[string]struct{}
|
||||
ServiceCats map[string]ServiceCat
|
||||
}
|
||||
|
||||
type ServiceCat struct {
|
||||
Service path.ServiceType
|
||||
Category path.CategoryType
|
||||
}
|
||||
|
||||
// MakeServiceCat produces the expected OwnersCats.ServiceCats key from a
|
||||
// path service and path category, as well as the ServiceCat value.
|
||||
func MakeServiceCat(s path.ServiceType, c path.CategoryType) (string, ServiceCat) {
|
||||
return serviceCatString(s, c), ServiceCat{s, c}
|
||||
}
|
||||
|
||||
// TODO(ashmrtn): Remove in a future PR.
|
||||
//
|
||||
//nolint:unused
|
||||
//lint:ignore U1000 will be removed in future PR.
|
||||
func serviceCatTag(p path.Path) string {
|
||||
return serviceCatString(p.Service(), p.Category())
|
||||
}
|
||||
|
||||
func serviceCatString(s path.ServiceType, c path.CategoryType) string {
|
||||
return s.String() + c.String()
|
||||
}
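A minimal sketch of the tag-key scheme these hunks use: keys are namespaced with `userTagPrefix`, and service/category pairs are concatenated into a single key. The `defaultTagValue` below is a stand-in, since its real value does not appear in this diff.

```go
package main

import "fmt"

const (
	userTagPrefix   = "tag:" // matches the constant in the hunk above
	defaultTagValue = "0"    // stand-in; the real default value isn't shown in this diff
)

// makeTagKV namespaces a user tag key so it can't collide with kopia's own
// snapshot tag keys, and pairs it with a placeholder value.
func makeTagKV(k string) (string, string) {
	return userTagPrefix + k, defaultTagValue
}

// serviceCatString concatenates the service and category names into one key,
// mirroring serviceCatString in the hunk above (string args for brevity).
func serviceCatString(s, c string) string {
	return s + c
}

func main() {
	k, v := makeTagKV(serviceCatString("exchange", "email"))
	fmt.Println(k, v) // tag:exchangeemail 0
}
```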
@@ -104,33 +83,6 @@ func makeTagKV(k string) (string, string) {
	return userTagPrefix + k, defaultTagValue
}

// tagsFromStrings returns a map[string]string with tags for all ownersCats
// passed in. Currently uses placeholder values for each tag because there can
// be multiple instances of resource owners and categories in a single snapshot.
// TODO(ashmrtn): Remove in future PR.
//
//nolint:unused
//lint:ignore U1000 will be removed in future PR.
func tagsFromStrings(oc *OwnersCats) map[string]string {
	if oc == nil {
		return map[string]string{}
	}

	res := make(map[string]string, len(oc.ServiceCats)+len(oc.ResourceOwners))

	for k := range oc.ServiceCats {
		tk, tv := makeTagKV(k)
		res[tk] = tv
	}

	for k := range oc.ResourceOwners {
		tk, tv := makeTagKV(k)
		res[tk] = tv
	}

	return res
}

// getLastIdx searches for manifests contained in both foundMans and metas
// and returns the most recent complete manifest index and the manifest it
// corresponds to. If no complete manifest is in both lists returns nil, -1.

@@ -15,6 +15,7 @@ import (
	"github.com/hashicorp/go-multierror"
	"github.com/kopia/kopia/fs"
	"github.com/kopia/kopia/fs/virtualfs"
	"github.com/kopia/kopia/repo/manifest"
	"github.com/kopia/kopia/snapshot/snapshotfs"
	"github.com/pkg/errors"

@@ -121,6 +122,7 @@ type itemDetails struct {
	info     *details.ItemInfo
	repoPath path.Path
	prevPath path.Path
	cached   bool
}

type corsoProgress struct {
@@ -179,7 +181,7 @@ func (cp *corsoProgress) FinishedFile(relativePath string, err error) {
		d.repoPath.String(),
		d.repoPath.ShortRef(),
		parent.ShortRef(),
		true,
		!d.cached,
		*d.info,
	)

@@ -187,7 +189,7 @@ func (cp *corsoProgress) FinishedFile(relativePath string, err error) {
	cp.deets.AddFoldersForItem(
		folders,
		*d.info,
		true, // itemUpdated = true
		!d.cached,
	)
}

@@ -199,6 +201,20 @@ func (cp *corsoProgress) FinishedHashingFile(fname string, bs int64) {
	atomic.AddInt64(&cp.totalBytes, bs)
}

// Kopia interface function used as a callback when kopia detects a previously
// uploaded file that matches the current file and skips uploading the new
// (duplicate) version.
func (cp *corsoProgress) CachedFile(fname string, size int64) {
	defer cp.UploadProgress.CachedFile(fname, size)

	d := cp.get(fname)
	if d == nil {
		return
	}

	d.cached = true
}
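The cached-entry bookkeeping introduced above can be sketched in isolation (names simplified; the real corsoProgress carries more state than this):

```go
package main

import (
	"fmt"
	"sync"
)

type itemDetails struct {
	cached bool
}

// progress tracks pending items. CachedFile flips the cached flag so that
// the finishing step can record an entry as "updated" only when the backend
// actually re-uploaded it rather than deduplicating it.
type progress struct {
	mu      sync.Mutex
	pending map[string]*itemDetails
}

func (p *progress) put(k string, v *itemDetails) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.pending[k] = v
}

func (p *progress) CachedFile(fname string, size int64) {
	p.mu.Lock()
	defer p.mu.Unlock()

	if d := p.pending[fname]; d != nil {
		d.cached = true
	}
}

// updated reports whether the item was freshly uploaded (not cached).
func (p *progress) updated(fname string) bool {
	p.mu.Lock()
	defer p.mu.Unlock()

	d := p.pending[fname]
	return d != nil && !d.cached
}

func main() {
	p := &progress{pending: map[string]*itemDetails{}}
	p.put("a", &itemDetails{})
	p.put("b", &itemDetails{})
	p.CachedFile("b", 42)
	fmt.Println(p.updated("a"), p.updated("b")) // true false
}
```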

func (cp *corsoProgress) put(k string, v *itemDetails) {
	cp.mu.Lock()
	defer cp.mu.Unlock()
@@ -271,7 +287,6 @@ func collectionEntries(
		continue
	}

	log.Debugw("reading item", "path", itemPath.String())
	trace.Log(ctx, "kopia:streamEntries:item", itemPath.String())

	if e.Deleted() {
@@ -870,6 +885,17 @@ func inflateDirTree(
		return nil, errors.Wrap(err, "inflating collection tree")
	}

	baseIDs := make([]manifest.ID, 0, len(baseSnaps))
	for _, snap := range baseSnaps {
		baseIDs = append(baseIDs, snap.ID)
	}

	logger.Ctx(ctx).Infow(
		"merging hierarchies from base snapshots",
		"snapshot_ids",
		baseIDs,
	)

	for _, snap := range baseSnaps {
		if err = inflateBaseTree(ctx, loader, snap, updatedPaths, roots); err != nil {
			return nil, errors.Wrap(err, "inflating base snapshot tree(s)")

@@ -433,8 +433,24 @@ var finishedFileTable = []struct {
}

func (suite *CorsoProgressUnitSuite) TestFinishedFile() {
	table := []struct {
		name   string
		cached bool
	}{
		{
			name:   "all updated",
			cached: false,
		},
		{
			name:   "all cached",
			cached: true,
		},
	}

	for _, cachedTest := range table {
		suite.T().Run(cachedTest.name, func(outerT *testing.T) {
	for _, test := range finishedFileTable {
		suite.T().Run(test.name, func(t *testing.T) {
		outerT.Run(test.name, func(t *testing.T) {
			bd := &details.Builder{}
			cp := corsoProgress{
				UploadProgress: &snapshotfs.NullUploadProgress{},
@@ -451,11 +467,24 @@ func (suite *CorsoProgressUnitSuite) TestFinishedFile() {
			require.Len(t, cp.pending, len(ci))

			for k, v := range ci {
				if cachedTest.cached {
					cp.CachedFile(k, 42)
				}

				cp.FinishedFile(k, v.err)
			}

			assert.Empty(t, cp.pending)
			assert.Len(t, bd.Details().Entries, test.expectedNumEntries)

			entries := bd.Details().Entries

			assert.Len(t, entries, test.expectedNumEntries)

			for _, entry := range entries {
				assert.Equal(t, !cachedTest.cached, entry.Updated)
			}
		})
	}
	})
	}
}

@@ -178,9 +178,36 @@ func (w Wrapper) makeSnapshotWithRoot(
		bc = &stats.ByteCounter{}
	)

	snapIDs := make([]manifest.ID, 0, len(prevSnapEntries))
	prevSnaps := make([]*snapshot.Manifest, 0, len(prevSnapEntries))

	for _, ent := range prevSnapEntries {
		prevSnaps = append(prevSnaps, ent.Manifest)
		snapIDs = append(snapIDs, ent.ID)
	}

	logger.Ctx(ctx).Infow(
		"using snapshots for kopia-assisted incrementals",
		"snapshot_ids",
		snapIDs,
	)

	checkpointTagK, checkpointTagV := makeTagKV(checkpointTagKey)

	tags := map[string]string{}
	checkpointTags := map[string]string{
		checkpointTagK: checkpointTagV,
	}

	for k, v := range addlTags {
		mk, mv := makeTagKV(k)

		if len(v) == 0 {
			v = mv
		}

		tags[mk] = v
		checkpointTags[mk] = v
	}

	err := repo.WriteSession(
@@ -219,6 +246,7 @@ func (w Wrapper) makeSnapshotWithRoot(
	u := snapshotfs.NewUploader(rw)
	progress.UploadProgress = u.Progress
	u.Progress = progress
	u.CheckpointLabels = checkpointTags

	man, err = u.Upload(innerCtx, root, policyTree, si, prevSnaps...)
	if err != nil {
@@ -227,17 +255,7 @@ func (w Wrapper) makeSnapshotWithRoot(
		return err
	}

	man.Tags = map[string]string{}

	for k, v := range addlTags {
		mk, mv := makeTagKV(k)

		if len(v) == 0 {
			v = mv
		}

		man.Tags[mk] = v
	}
	man.Tags = tags

	if _, err := snapshot.SaveSnapshot(innerCtx, rw, man); err != nil {
		err = errors.Wrap(err, "saving snapshot")

@@ -241,16 +241,20 @@ func (suite *KopiaIntegrationSuite) TestBackupCollections() {
	name                  string
	expectedUploadedFiles int
	expectedCachedFiles   int
	// Whether entries in the resulting details should be marked as updated.
	deetsUpdated bool
}{
	{
		name:                  "Uncached",
		expectedUploadedFiles: 47,
		expectedCachedFiles:   0,
		deetsUpdated:          true,
	},
	{
		name:                  "Cached",
		expectedUploadedFiles: 0,
		expectedCachedFiles:   47,
		deetsUpdated:          false,
	},
}

@@ -274,13 +278,19 @@ func (suite *KopiaIntegrationSuite) TestBackupCollections() {
	assert.Equal(t, 0, stats.IgnoredErrorCount)
	assert.Equal(t, 0, stats.ErrorCount)
	assert.False(t, stats.Incomplete)

	// 47 file and 6 folder entries.
	details := deets.Details().Entries
	assert.Len(
		t,
		deets.Details().Entries,
		details,
		test.expectedUploadedFiles+test.expectedCachedFiles+6,
	)

	for _, entry := range details {
		assert.Equal(t, test.deetsUpdated, entry.Updated)
	}

	checkSnapshotTags(
		t,
		suite.ctx,

@@ -7,10 +7,13 @@ import (
	"os"
	"sync"

	"github.com/dustin/go-humanize"
	"github.com/spf13/cobra"
	"github.com/spf13/pflag"
	"github.com/vbauerster/mpb/v8"
	"github.com/vbauerster/mpb/v8/decor"

	"github.com/alcionai/corso/src/pkg/logger"
)

const (
@@ -127,15 +130,17 @@ func Complete() {
}

const (
	ItemBackupMsg  = "Backing up item:"
	ItemRestoreMsg = "Restoring item:"
	ItemQueueMsg   = "Queuing items:"
	ItemBackupMsg  = "Backing up item"
	ItemRestoreMsg = "Restoring item"
	ItemQueueMsg   = "Queuing items"
)

// Progress Updates

// Message is used to display a progress message
func Message(message string) {
func Message(ctx context.Context, message string) {
	logger.Ctx(ctx).Info(message)

	if cfg.hidden() {
		return
	}
@@ -153,12 +158,15 @@ func Message(message string) {
	// Complete the bar immediately
	bar.SetTotal(-1, true)

	waitAndCloseBar(bar)()
	waitAndCloseBar(bar, func() {})()
}

// MessageWithCompletion is used to display progress with a spinner
// that switches to "done" when the completion channel is signalled
func MessageWithCompletion(message string) (chan<- struct{}, func()) {
func MessageWithCompletion(ctx context.Context, message string) (chan<- struct{}, func()) {
	log := logger.Ctx(ctx)
	log.Info(message)

	completionCh := make(chan struct{}, 1)

	if cfg.hidden() {
@@ -173,7 +181,7 @@ func MessageWithCompletion(message string) (chan<- struct{}, func()) {
		-1,
		mpb.SpinnerStyle(frames...).PositionLeft(),
		mpb.PrependDecorators(
			decor.Name(message),
			decor.Name(message+":"),
			decor.Elapsed(decor.ET_STYLE_GO, decor.WC{W: 8}),
		),
		mpb.BarFillerOnComplete("done"),
@@ -192,7 +200,11 @@ func MessageWithCompletion(message string) (chan<- struct{}, func()) {
		}
	}(completionCh)

	return completionCh, waitAndCloseBar(bar)
	wacb := waitAndCloseBar(bar, func() {
		log.Info("done - " + message)
	})

	return completionCh, wacb
}

// ---------------------------------------------------------------------------
@@ -202,7 +214,15 @@ func MessageWithCompletion(message string) (chan<- struct{}, func()) {
// ItemProgress tracks the display of an item in a folder by counting the bytes
// read through the provided readcloser, up until the byte count matches
// the totalBytes.
func ItemProgress(rc io.ReadCloser, header, iname string, totalBytes int64) (io.ReadCloser, func()) {
func ItemProgress(
	ctx context.Context,
	rc io.ReadCloser,
	header, iname string,
	totalBytes int64,
) (io.ReadCloser, func()) {
	log := logger.Ctx(ctx).With("item", iname, "size", humanize.Bytes(uint64(totalBytes)))
	log.Debug(header)

	if cfg.hidden() || rc == nil || totalBytes == 0 {
		return rc, func() {}
	}
@@ -224,14 +244,23 @@ func ItemProgress(rc io.ReadCloser, header, iname string, totalBytes int64) (io.

	bar := progress.New(totalBytes, mpb.NopStyle(), barOpts...)

	return bar.ProxyReader(rc), waitAndCloseBar(bar)
	wacb := waitAndCloseBar(bar, func() {
		// might be overly chatty, we can remove if needed.
		log.Debug("done - " + header)
	})

	return bar.ProxyReader(rc), wacb
}

// ProgressWithCount tracks the display of a bar that tracks the completion
// of the specified count.
// Each write to the provided channel counts as a single increment.
// The caller is expected to close the channel.
func ProgressWithCount(header, message string, count int64) (chan<- struct{}, func()) {
func ProgressWithCount(ctx context.Context, header, message string, count int64) (chan<- struct{}, func()) {
	log := logger.Ctx(ctx)
	lmsg := fmt.Sprintf("%s %s - %d", header, message, count)
	log.Info(lmsg)

	progressCh := make(chan struct{})

	if cfg.hidden() {
@@ -282,7 +311,11 @@ func ProgressWithCount(header, message string, count int64) (chan<- struct{}, fu
		}
	}(ch)

	return ch, waitAndCloseBar(bar)
	wacb := waitAndCloseBar(bar, func() {
		log.Info("done - " + lmsg)
	})

	return ch, wacb
}

// ---------------------------------------------------------------------------
@@ -320,7 +353,14 @@ func makeSpinFrames(barWidth int) {
// CollectionProgress tracks the display a spinner that idles while the collection
// incrementing the count of items handled. Each write to the provided channel
// counts as a single increment. The caller is expected to close the channel.
func CollectionProgress(user, category, dirName string) (chan<- struct{}, func()) {
func CollectionProgress(
	ctx context.Context,
	user, category, dirName string,
) (chan<- struct{}, func()) {
	log := logger.Ctx(ctx).With("user", user, "category", category, "dir", dirName)
	message := "Collecting " + dirName
	log.Info(message)

	if cfg.hidden() || len(user) == 0 || len(dirName) == 0 {
		ch := make(chan struct{})

@@ -357,6 +397,8 @@ func CollectionProgress(user, category, dirName string) (chan<- struct{}, func()
		barOpts...,
	)

	var counted int

	ch := make(chan struct{})
	go func(ci <-chan struct{}) {
		for {
@@ -371,17 +413,34 @@ func CollectionProgress(user, category, dirName string) (chan<- struct{}, func()
				return
			}

			counted++

			bar.Increment()
		}
	}
	}(ch)

	return ch, waitAndCloseBar(bar)
	wacb := waitAndCloseBar(bar, func() {
		log.Infow("done - "+message, "count", counted)
	})

	return ch, wacb
}

func waitAndCloseBar(bar *mpb.Bar) func() {
func waitAndCloseBar(bar *mpb.Bar, log func()) func() {
	return func() {
		bar.Wait()
		wg.Done()
		log()
	}
}
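The waitAndCloseBar change threads a logging callback through every progress closer. Stripped of the mpb bar, the shape of that change is roughly:

```go
package main

import "fmt"

// waitAndClose mirrors the pattern above: the returned closure first runs the
// wait step (draining the progress bar in the real code), then the
// caller-supplied log callback, so each helper can emit its own "done" line.
func waitAndClose(wait, log func()) func() {
	return func() {
		wait()
		log()
	}
}

func main() {
	closer := waitAndClose(
		func() { fmt.Println("bar drained") },
		func() { fmt.Println("done - Restoring item") },
	)
	closer()
}
```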

// ---------------------------------------------------------------------------
// other funcs
// ---------------------------------------------------------------------------

// Bulletf prepends the message with "∙ ", and formats it.
// Ex: Bulletf("%s", "foo") => "∙ foo"
func Bulletf(template string, vs ...any) string {
	return fmt.Sprintf("∙ "+template, vs...)
}

@@ -44,6 +44,7 @@ func (suite *ObserveProgressUnitSuite) TestItemProgress() {

	from := make([]byte, 100)
	prog, closer := observe.ItemProgress(
		ctx,
		io.NopCloser(bytes.NewReader(from)),
		"folder",
		"test",
@@ -96,7 +97,7 @@ func (suite *ObserveProgressUnitSuite) TestCollectionProgress_unblockOnCtxCancel
		observe.SeedWriter(context.Background(), nil, nil)
	}()

	progCh, closer := observe.CollectionProgress("test", "testcat", "testertons")
	progCh, closer := observe.CollectionProgress(ctx, "test", "testcat", "testertons")
	require.NotNil(t, progCh)
	require.NotNil(t, closer)

@@ -131,7 +132,7 @@ func (suite *ObserveProgressUnitSuite) TestCollectionProgress_unblockOnChannelCl
		observe.SeedWriter(context.Background(), nil, nil)
	}()

	progCh, closer := observe.CollectionProgress("test", "testcat", "testertons")
	progCh, closer := observe.CollectionProgress(ctx, "test", "testcat", "testertons")
	require.NotNil(t, progCh)
	require.NotNil(t, closer)

@@ -163,7 +164,7 @@ func (suite *ObserveProgressUnitSuite) TestObserveProgress() {

	message := "Test Message"

	observe.Message(message)
	observe.Message(ctx, message)
	observe.Complete()
	require.NotEmpty(suite.T(), recorder.String())
	require.Contains(suite.T(), recorder.String(), message)
@@ -184,7 +185,7 @@ func (suite *ObserveProgressUnitSuite) TestObserveProgressWithCompletion() {

	message := "Test Message"

	ch, closer := observe.MessageWithCompletion(message)
	ch, closer := observe.MessageWithCompletion(ctx, message)

	// Trigger completion
	ch <- struct{}{}
@@ -214,7 +215,7 @@ func (suite *ObserveProgressUnitSuite) TestObserveProgressWithChannelClosed() {

	message := "Test Message"

	ch, closer := observe.MessageWithCompletion(message)
	ch, closer := observe.MessageWithCompletion(ctx, message)

	// Close channel without completing
	close(ch)
@@ -246,7 +247,7 @@ func (suite *ObserveProgressUnitSuite) TestObserveProgressWithContextCancelled()

	message := "Test Message"

	_, closer := observe.MessageWithCompletion(message)
	_, closer := observe.MessageWithCompletion(ctx, message)

	// cancel context
	cancel()
@@ -277,7 +278,7 @@ func (suite *ObserveProgressUnitSuite) TestObserveProgressWithCount() {
	message := "Test Message"
	count := 3

	ch, closer := observe.ProgressWithCount(header, message, int64(count))
	ch, closer := observe.ProgressWithCount(ctx, header, message, int64(count))

	for i := 0; i < count; i++ {
		ch <- struct{}{}
@@ -310,7 +311,7 @@ func (suite *ObserveProgressUnitSuite) TestObserveProgressWithCountChannelClosed
	message := "Test Message"
	count := 3

	ch, closer := observe.ProgressWithCount(header, message, int64(count))
	ch, closer := observe.ProgressWithCount(ctx, header, message, int64(count))

	close(ch)

@@ -6,12 +6,10 @@ import (

	"github.com/google/uuid"
	multierror "github.com/hashicorp/go-multierror"
	"github.com/kopia/kopia/repo/manifest"
	"github.com/pkg/errors"

	"github.com/alcionai/corso/src/internal/common"
	"github.com/alcionai/corso/src/internal/connector"
	"github.com/alcionai/corso/src/internal/connector/graph"
	"github.com/alcionai/corso/src/internal/connector/support"
	"github.com/alcionai/corso/src/internal/data"
	D "github.com/alcionai/corso/src/internal/diagnostics"
@@ -218,6 +216,11 @@ func (op *BackupOperation) Run(ctx context.Context) (err error) {
// checker to see if conditions are correct for incremental backup behavior such as
// retrieving metadata like delta tokens and previous paths.
func useIncrementalBackup(sel selectors.Selector, opts control.Options) bool {
	// Delta-based incrementals currently only supported for Exchange
	if sel.Service != selectors.ServiceExchange {
		return false
	}

	return !opts.ToggleFeatures.DisableIncrementals
}

@@ -233,7 +236,7 @@ func produceBackupDataCollections(
	metadata []data.Collection,
	ctrlOpts control.Options,
) ([]data.Collection, error) {
	complete, closer := observe.MessageWithCompletion("Discovering items to backup:")
	complete, closer := observe.MessageWithCompletion(ctx, "Discovering items to backup")
	defer func() {
		complete <- struct{}{}
		close(complete)
@@ -257,178 +260,6 @@ type backuper interface {
	) (*kopia.BackupStats, *details.Builder, map[string]path.Path, error)
}

func verifyDistinctBases(mans []*kopia.ManifestEntry) error {
	var (
		errs    *multierror.Error
		reasons = map[string]manifest.ID{}
	)

	for _, man := range mans {
		// Incomplete snapshots are used only for kopia-assisted incrementals. The
		// fact that we need this check here makes it seem like this should live in
		// the kopia code. However, keeping it here allows for better debugging as
		// the kopia code only has access to a path builder which means it cannot
		// remove the resource owner from the error/log output. That is also below
		// the point where we decide if we should do a full backup or an
		// incremental.
		if len(man.IncompleteReason) > 0 {
			continue
		}

		for _, reason := range man.Reasons {
			reasonKey := reason.ResourceOwner + reason.Service.String() + reason.Category.String()

			if b, ok := reasons[reasonKey]; ok {
				errs = multierror.Append(errs, errors.Errorf(
					"multiple base snapshots source data for %s %s. IDs: %s, %s",
					reason.Service.String(),
					reason.Category.String(),
					b,
					man.ID,
				))

				continue
			}

			reasons[reasonKey] = man.ID
		}
	}

	return errs.ErrorOrNil()
}
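The collision check in verifyDistinctBases reduces to detecting a duplicate (owner, service, category) triple with a map of seen keys. A standalone sketch (plain string fields in place of the real path types):

```go
package main

import "fmt"

type reason struct {
	owner, service, category string
}

// firstCollision returns the key of the first (owner, service, category)
// triple that appears twice, mirroring the reasonKey map walk above.
func firstCollision(rs []reason) (string, bool) {
	seen := map[string]bool{}

	for _, r := range rs {
		k := r.owner + r.service + r.category
		if seen[k] {
			return k, true
		}

		seen[k] = true
	}

	return "", false
}

func main() {
	rs := []reason{
		{"user1", "exchange", "email"},
		{"user1", "exchange", "contacts"},
		{"user1", "exchange", "email"}, // second base for the same reason
	}
	k, dup := firstCollision(rs)
	fmt.Println(k, dup) // user1exchangeemail true
}
```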

// calls kopia to retrieve prior backup manifests, metadata collections to supply backup heuristics.
func produceManifestsAndMetadata(
	ctx context.Context,
	kw *kopia.Wrapper,
	sw *store.Wrapper,
	reasons []kopia.Reason,
	tenantID string,
	getMetadata bool,
) ([]*kopia.ManifestEntry, []data.Collection, bool, error) {
	var (
		metadataFiles = graph.AllMetadataFileNames()
		collections   []data.Collection
	)

	ms, err := kw.FetchPrevSnapshotManifests(
		ctx,
		reasons,
		map[string]string{kopia.TagBackupCategory: ""})
	if err != nil {
		return nil, nil, false, err
	}

	if !getMetadata {
		return ms, nil, false, nil
	}

	// We only need to check that we have 1:1 reason:base if we're doing an
	// incremental with associated metadata. This ensures that we're only sourcing
	// data from a single Point-In-Time (base) for each incremental backup.
	//
	// TODO(ashmrtn): This may need updating if we start sourcing item backup
	// details from previous snapshots when using kopia-assisted incrementals.
	if err := verifyDistinctBases(ms); err != nil {
		logger.Ctx(ctx).Warnw(
			"base snapshot collision, falling back to full backup",
			"error",
			err,
		)

		return ms, nil, false, nil
	}

	for _, man := range ms {
		if len(man.IncompleteReason) > 0 {
			continue
		}

		bID, ok := man.GetTag(kopia.TagBackupID)
		if !ok {
			return nil, nil, false, errors.New("snapshot manifest missing backup ID")
		}

		dID, _, err := sw.GetDetailsIDFromBackupID(ctx, model.StableID(bID))
		if err != nil {
			// if no backup exists for any of the complete manifests, we want
			// to fall back to a complete backup.
			if errors.Is(err, kopia.ErrNotFound) {
				logger.Ctx(ctx).Infow(
					"backup missing, falling back to full backup",
					"backup_id", bID)

				return ms, nil, false, nil
			}

			return nil, nil, false, errors.Wrap(err, "retrieving prior backup data")
		}

		// if no detailsID exists for any of the complete manifests, we want
		// to fall back to a complete backup. This is a temporary prevention
		// mechanism to keep backups from falling into a perpetually bad state.
		// This makes an assumption that the ID points to a populated set of
		// details; we aren't doing the work to look them up.
		if len(dID) == 0 {
			logger.Ctx(ctx).Infow(
				"backup missing details ID, falling back to full backup",
				"backup_id", bID)

			return ms, nil, false, nil
		}

		colls, err := collectMetadata(ctx, kw, man, metadataFiles, tenantID)
		if err != nil && !errors.Is(err, kopia.ErrNotFound) {
			// prior metadata isn't guaranteed to exist.
			// if it doesn't, we'll just have to do a
			// full backup for that data.
			return nil, nil, false, err
		}

		collections = append(collections, colls...)
	}

	return ms, collections, true, err
}

func collectMetadata(
	ctx context.Context,
	r restorer,
	man *kopia.ManifestEntry,
	fileNames []string,
	tenantID string,
) ([]data.Collection, error) {
	paths := []path.Path{}

	for _, fn := range fileNames {
		for _, reason := range man.Reasons {
			p, err := path.Builder{}.
				Append(fn).
				ToServiceCategoryMetadataPath(
					tenantID,
					reason.ResourceOwner,
					reason.Service,
					reason.Category,
					true)
			if err != nil {
				return nil, errors.Wrapf(err, "building metadata path")
			}

			paths = append(paths, p)
		}
	}

	dcs, err := r.RestoreMultipleItems(ctx, string(man.ID), paths, nil)
	if err != nil {
		// Restore is best-effort and we want to keep it that way since we want to
		// return as much metadata as we can to reduce the work we'll need to do.
		// Just wrap the error here for better reporting/debugging.
		return dcs, errors.Wrap(err, "collecting prior metadata")
	}

	return dcs, nil
}

func selectorToReasons(sel selectors.Selector) []kopia.Reason {
	service := sel.PathService()
	reasons := []kopia.Reason{}
@@ -487,7 +318,7 @@ func consumeBackupDataCollections(
	backupID model.StableID,
	isIncremental bool,
) (*kopia.BackupStats, *details.Builder, map[string]path.Path, error) {
	complete, closer := observe.MessageWithCompletion("Backing up data:")
	complete, closer := observe.MessageWithCompletion(ctx, "Backing up data")
	defer func() {
		complete <- struct{}{}
		close(complete)
@@ -509,6 +340,8 @@ func consumeBackupDataCollections(

	for _, m := range mans {
		paths := make([]*path.Builder, 0, len(m.Reasons))
		services := map[string]struct{}{}
		categories := map[string]struct{}{}

		for _, reason := range m.Reasons {
			pb, err := builderFromReason(tenantID, reason)
@@ -517,12 +350,34 @@ func consumeBackupDataCollections(
			}

			paths = append(paths, pb)
			services[reason.Service.String()] = struct{}{}
			categories[reason.Category.String()] = struct{}{}
		}

		bases = append(bases, kopia.IncrementalBase{
			Manifest:     m.Manifest,
			SubtreePaths: paths,
		})

		svcs := make([]string, 0, len(services))
		for k := range services {
			svcs = append(svcs, k)
		}

		cats := make([]string, 0, len(categories))
		for k := range categories {
			cats = append(cats, k)
		}

		logger.Ctx(ctx).Infow(
			"using base for backup",
			"snapshot_id",
			m.ID,
			"services",
			svcs,
			"categories",
			cats,
		)
	}

	return bu.BackupCollections(ctx, bases, cs, tags, isIncremental)

@@ -36,6 +36,25 @@ import (

type mockRestorer struct {
	gotPaths  []path.Path
	colls     []data.Collection
	collsByID map[string][]data.Collection // snapshotID: []Collection
	err       error
	onRestore restoreFunc
}

type restoreFunc func(id string, ps []path.Path) ([]data.Collection, error)

func (mr *mockRestorer) buildRestoreFunc(
	t *testing.T,
	oid string,
	ops []path.Path,
) {
	mr.onRestore = func(id string, ps []path.Path) ([]data.Collection, error) {
		assert.Equal(t, oid, id, "manifest id")
		checkPaths(t, ops, ps)

		return mr.colls, mr.err
	}
}

func (mr *mockRestorer) RestoreMultipleItems(
@@ -46,13 +65,19 @@ func (mr *mockRestorer) RestoreMultipleItems(
) ([]data.Collection, error) {
	mr.gotPaths = append(mr.gotPaths, paths...)

	return nil, nil
	if mr.onRestore != nil {
		return mr.onRestore(snapshotID, paths)
	}

	if len(mr.collsByID) > 0 {
		return mr.collsByID[snapshotID], mr.err
	}

	return mr.colls, mr.err
}

func (mr mockRestorer) checkPaths(t *testing.T, expected []path.Path) {
	t.Helper()

	assert.ElementsMatch(t, expected, mr.gotPaths)
func checkPaths(t *testing.T, expected, got []path.Path) {
	assert.ElementsMatch(t, expected, got)
}

// ----- backup producer
@@ -168,6 +193,27 @@ func (mbs mockBackupStorer) Update(context.Context, model.Schema, model.Model) e
|
||||
// helper funcs
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
// expects you to Append your own file
|
||||
func makeMetadataBasePath(
|
||||
t *testing.T,
|
||||
tenant string,
|
||||
service path.ServiceType,
|
||||
resourceOwner string,
|
||||
category path.CategoryType,
|
||||
) path.Path {
|
||||
t.Helper()
|
||||
|
||||
p, err := path.Builder{}.ToServiceCategoryMetadataPath(
|
||||
tenant,
|
||||
resourceOwner,
|
||||
service,
|
||||
category,
|
||||
false)
|
||||
require.NoError(t, err)
|
||||
|
||||
return p
|
||||
}
|
||||
|
||||
func makeMetadataPath(
|
||||
t *testing.T,
|
||||
tenant string,
|
||||
@ -183,8 +229,7 @@ func makeMetadataPath(
|
||||
resourceOwner,
|
||||
service,
|
||||
category,
|
||||
true,
|
||||
)
|
||||
true)
|
||||
require.NoError(t, err)
|
||||
|
||||
return p
|
||||
@ -635,7 +680,7 @@ func (suite *BackupOpSuite) TestBackupOperation_CollectMetadata() {
|
||||
_, err := collectMetadata(ctx, mr, test.inputMan, test.inputFiles, tenant)
|
||||
assert.NoError(t, err)
|
||||
|
||||
mr.checkPaths(t, test.expected)
|
||||
checkPaths(t, test.expected, mr.gotPaths)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
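An aside on the consumeBackupDataCollections hunk: the `services`/`categories` maps are used purely as sets, then flattened into slices for the structured log line. A minimal, self-contained sketch of that map-as-set dedup pattern (`uniqueKeys` is an illustrative helper, not a corso API):

```go
package main

import (
	"fmt"
	"sort"
)

// uniqueKeys mirrors the dedup used before logging: collect names into a
// map-as-set, then flatten the keys into a slice.
func uniqueKeys(vals []string) []string {
	set := map[string]struct{}{}
	for _, v := range vals {
		set[v] = struct{}{}
	}

	out := make([]string, 0, len(set))
	for k := range set {
		out = append(out, k)
	}

	// map iteration order is randomized; sort for deterministic output
	sort.Strings(out)

	return out
}

func main() {
	fmt.Println(uniqueKeys([]string{"email", "contacts", "email"})) // [contacts email]
}
```

The diff itself skips the sort, which is fine for a log line but means the slice order varies run to run.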
210  src/internal/operations/manifests.go  Normal file
@@ -0,0 +1,210 @@
package operations

import (
    "context"

    multierror "github.com/hashicorp/go-multierror"
    "github.com/kopia/kopia/repo/manifest"
    "github.com/pkg/errors"

    "github.com/alcionai/corso/src/internal/connector/graph"
    "github.com/alcionai/corso/src/internal/data"
    "github.com/alcionai/corso/src/internal/kopia"
    "github.com/alcionai/corso/src/internal/model"
    "github.com/alcionai/corso/src/pkg/backup"
    "github.com/alcionai/corso/src/pkg/logger"
    "github.com/alcionai/corso/src/pkg/path"
)

type manifestFetcher interface {
    FetchPrevSnapshotManifests(
        ctx context.Context,
        reasons []kopia.Reason,
        tags map[string]string,
    ) ([]*kopia.ManifestEntry, error)
}

type manifestRestorer interface {
    manifestFetcher
    restorer
}

type getDetailsIDer interface {
    GetDetailsIDFromBackupID(
        ctx context.Context,
        backupID model.StableID,
    ) (string, *backup.Backup, error)
}

// calls kopia to retrieve prior backup manifests and the metadata collections
// that supply backup heuristics.
func produceManifestsAndMetadata(
    ctx context.Context,
    mr manifestRestorer,
    gdi getDetailsIDer,
    reasons []kopia.Reason,
    tenantID string,
    getMetadata bool,
) ([]*kopia.ManifestEntry, []data.Collection, bool, error) {
    var (
        metadataFiles = graph.AllMetadataFileNames()
        collections   []data.Collection
    )

    ms, err := mr.FetchPrevSnapshotManifests(
        ctx,
        reasons,
        map[string]string{kopia.TagBackupCategory: ""})
    if err != nil {
        return nil, nil, false, err
    }

    if !getMetadata {
        return ms, nil, false, nil
    }

    // We only need to check that we have 1:1 reason:base if we're doing an
    // incremental with associated metadata. This ensures that we're only sourcing
    // data from a single Point-In-Time (base) for each incremental backup.
    //
    // TODO(ashmrtn): This may need updating if we start sourcing item backup
    // details from previous snapshots when using kopia-assisted incrementals.
    if err := verifyDistinctBases(ms); err != nil {
        logger.Ctx(ctx).Warnw(
            "base snapshot collision, falling back to full backup",
            "error",
            err,
        )

        return ms, nil, false, nil
    }

    for _, man := range ms {
        if len(man.IncompleteReason) > 0 {
            continue
        }

        bID, ok := man.GetTag(kopia.TagBackupID)
        if !ok {
            return nil, nil, false, errors.New("snapshot manifest missing backup ID")
        }

        dID, _, err := gdi.GetDetailsIDFromBackupID(ctx, model.StableID(bID))
        if err != nil {
            // if no backup exists for any of the complete manifests, we want
            // to fall back to a complete backup.
            if errors.Is(err, kopia.ErrNotFound) {
                logger.Ctx(ctx).Infow(
                    "backup missing, falling back to full backup",
                    "backup_id", bID)

                return ms, nil, false, nil
            }

            return nil, nil, false, errors.Wrap(err, "retrieving prior backup data")
        }

        // if no detailsID exists for any of the complete manifests, we want
        // to fall back to a complete backup. This is a temporary prevention
        // mechanism to keep backups from falling into a perpetually bad state.
        // This makes an assumption that the ID points to a populated set of
        // details; we aren't doing the work to look them up.
        if len(dID) == 0 {
            logger.Ctx(ctx).Infow(
                "backup missing details ID, falling back to full backup",
                "backup_id", bID)

            return ms, nil, false, nil
        }

        colls, err := collectMetadata(ctx, mr, man, metadataFiles, tenantID)
        if err != nil && !errors.Is(err, kopia.ErrNotFound) {
            // prior metadata isn't guaranteed to exist.
            // if it doesn't, we'll just have to do a
            // full backup for that data.
            return nil, nil, false, err
        }

        collections = append(collections, colls...)
    }

    return ms, collections, true, err
}

// verifyDistinctBases is a validation checker that ensures, for a given slice
// of manifests, that each manifest's Reason (owner, service, category) is only
// included once. If a reason is duplicated by any two manifests, an error is
// returned.
func verifyDistinctBases(mans []*kopia.ManifestEntry) error {
    var (
        errs    *multierror.Error
        reasons = map[string]manifest.ID{}
    )

    for _, man := range mans {
        // Incomplete snapshots are used only for kopia-assisted incrementals. The
        // fact that we need this check here makes it seem like this should live in
        // the kopia code. However, keeping it here allows for better debugging as
        // the kopia code only has access to a path builder which means it cannot
        // remove the resource owner from the error/log output. That is also below
        // the point where we decide if we should do a full backup or an incremental.
        if len(man.IncompleteReason) > 0 {
            continue
        }

        for _, reason := range man.Reasons {
            reasonKey := reason.ResourceOwner + reason.Service.String() + reason.Category.String()

            if b, ok := reasons[reasonKey]; ok {
                errs = multierror.Append(errs, errors.Errorf(
                    "multiple base snapshots source data for %s %s. IDs: %s, %s",
                    reason.Service, reason.Category, b, man.ID,
                ))

                continue
            }

            reasons[reasonKey] = man.ID
        }
    }

    return errs.ErrorOrNil()
}

// collectMetadata retrieves all metadata files associated with the manifest.
func collectMetadata(
    ctx context.Context,
    r restorer,
    man *kopia.ManifestEntry,
    fileNames []string,
    tenantID string,
) ([]data.Collection, error) {
    paths := []path.Path{}

    for _, fn := range fileNames {
        for _, reason := range man.Reasons {
            p, err := path.Builder{}.
                Append(fn).
                ToServiceCategoryMetadataPath(
                    tenantID,
                    reason.ResourceOwner,
                    reason.Service,
                    reason.Category,
                    true)
            if err != nil {
                return nil, errors.Wrapf(err, "building metadata path")
            }

            paths = append(paths, p)
        }
    }

    dcs, err := r.RestoreMultipleItems(ctx, string(man.ID), paths, nil)
    if err != nil {
        // Restore is best-effort and we want to keep it that way since we want to
        // return as much metadata as we can to reduce the work we'll need to do.
        // Just wrap the error here for better reporting/debugging.
        return dcs, errors.Wrap(err, "collecting prior metadata")
    }

    return dcs, nil
}
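The collision check is the crux of the new file. A minimal, self-contained sketch of the same idea, using hypothetical local `Reason`/`ManifestEntry` stand-ins (not the kopia types) and a struct map key rather than the concatenated-string key that `verifyDistinctBases` builds:

```go
package main

import "fmt"

// Reason and ManifestEntry are hypothetical stand-ins for kopia.Reason and
// kopia.ManifestEntry; only the fields the check needs are modeled.
type Reason struct {
	ResourceOwner, Service, Category string
}

type ManifestEntry struct {
	ID               string
	IncompleteReason string
	Reasons          []Reason
}

// verifyDistinctBases: each (owner, service, category) tuple may be sourced by
// at most one complete base snapshot. The real code accumulates a multierror;
// this sketch returns the first collision it finds.
func verifyDistinctBases(mans []ManifestEntry) error {
	seen := map[Reason]string{}

	for _, man := range mans {
		if man.IncompleteReason != "" {
			continue // incomplete snapshots never serve as bases here
		}

		for _, r := range man.Reasons {
			if prev, ok := seen[r]; ok {
				return fmt.Errorf(
					"multiple base snapshots source data for %s %s. IDs: %s, %s",
					r.Service, r.Category, prev, man.ID)
			}

			seen[r] = man.ID
		}
	}

	return nil
}

func main() {
	distinct := []ManifestEntry{
		{ID: "a", Reasons: []Reason{{"ro", "exchange", "email"}}},
		{ID: "b", Reasons: []Reason{{"ro", "exchange", "contacts"}}},
	}
	fmt.Println(verifyDistinctBases(distinct) == nil) // true

	overlap := []ManifestEntry{
		{ID: "a", Reasons: []Reason{{"ro", "exchange", "email"}}},
		{ID: "c", Reasons: []Reason{{"ro", "exchange", "email"}}},
	}
	fmt.Println(verifyDistinctBases(overlap) != nil) // true
}
```

A comparable struct as the map key sidesteps the (unlikely) ambiguity of a separator-free concatenated key, where different owner/service/category triples could collide on the same string.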
685  src/internal/operations/manifests_test.go  Normal file
@@ -0,0 +1,685 @@
package operations

import (
    "context"
    "testing"

    "github.com/kopia/kopia/repo/manifest"
    "github.com/kopia/kopia/snapshot"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/suite"

    "github.com/alcionai/corso/src/internal/data"
    "github.com/alcionai/corso/src/internal/kopia"
    "github.com/alcionai/corso/src/internal/model"
    "github.com/alcionai/corso/src/internal/tester"
    "github.com/alcionai/corso/src/pkg/backup"
    "github.com/alcionai/corso/src/pkg/path"
)

// ---------------------------------------------------------------------------
// interfaces
// ---------------------------------------------------------------------------

type mockManifestRestorer struct {
    mockRestorer
    mans  []*kopia.ManifestEntry
    mrErr error // err varname already claimed by mockRestorer
}

func (mmr mockManifestRestorer) FetchPrevSnapshotManifests(
    ctx context.Context,
    reasons []kopia.Reason,
    tags map[string]string,
) ([]*kopia.ManifestEntry, error) {
    return mmr.mans, mmr.mrErr
}

type mockGetDetailsIDer struct {
    detailsID string
    err       error
}

func (mg mockGetDetailsIDer) GetDetailsIDFromBackupID(
    ctx context.Context,
    backupID model.StableID,
) (string, *backup.Backup, error) {
    return mg.detailsID, nil, mg.err
}

type mockColl struct {
    id    string // for comparisons
    p     path.Path
    prevP path.Path
}

func (mc mockColl) Items() <-chan data.Stream {
    return nil
}

func (mc mockColl) FullPath() path.Path {
    return mc.p
}

func (mc mockColl) PreviousPath() path.Path {
    return mc.prevP
}

func (mc mockColl) State() data.CollectionState {
    return data.NewState
}

func (mc mockColl) DoNotMergeItems() bool {
    return false
}

// ---------------------------------------------------------------------------
// tests
// ---------------------------------------------------------------------------

type OperationsManifestsUnitSuite struct {
    suite.Suite
}

func TestOperationsManifestsUnitSuite(t *testing.T) {
    suite.Run(t, new(OperationsManifestsUnitSuite))
}

func (suite *OperationsManifestsUnitSuite) TestCollectMetadata() {
    const (
        ro  = "owner"
        tid = "tenantid"
    )

    var (
        emailPath = makeMetadataBasePath(
            suite.T(),
            tid,
            path.ExchangeService,
            ro,
            path.EmailCategory)
        contactPath = makeMetadataBasePath(
            suite.T(),
            tid,
            path.ExchangeService,
            ro,
            path.ContactsCategory)
    )

    table := []struct {
        name        string
        manID       string
        reasons     []kopia.Reason
        fileNames   []string
        expectPaths func(*testing.T, []string) []path.Path
        expectErr   error
    }{
        {
            name:  "single reason, single file",
            manID: "single single",
            reasons: []kopia.Reason{
                {
                    ResourceOwner: ro,
                    Service:       path.ExchangeService,
                    Category:      path.EmailCategory,
                },
            },
            expectPaths: func(t *testing.T, files []string) []path.Path {
                ps := make([]path.Path, 0, len(files))

                for _, f := range files {
                    p, err := emailPath.Append(f, true)
                    assert.NoError(t, err)
                    ps = append(ps, p)
                }

                return ps
            },
            fileNames: []string{"a"},
        },
        {
            name:  "single reason, multiple files",
            manID: "single multi",
            reasons: []kopia.Reason{
                {
                    ResourceOwner: ro,
                    Service:       path.ExchangeService,
                    Category:      path.EmailCategory,
                },
            },
            expectPaths: func(t *testing.T, files []string) []path.Path {
                ps := make([]path.Path, 0, len(files))

                for _, f := range files {
                    p, err := emailPath.Append(f, true)
                    assert.NoError(t, err)
                    ps = append(ps, p)
                }

                return ps
            },
            fileNames: []string{"a", "b"},
        },
        {
            name:  "multiple reasons, single file",
            manID: "multi single",
            reasons: []kopia.Reason{
                {
                    ResourceOwner: ro,
                    Service:       path.ExchangeService,
                    Category:      path.EmailCategory,
                },
                {
                    ResourceOwner: ro,
                    Service:       path.ExchangeService,
                    Category:      path.ContactsCategory,
                },
            },
            expectPaths: func(t *testing.T, files []string) []path.Path {
                ps := make([]path.Path, 0, len(files))

                for _, f := range files {
                    p, err := emailPath.Append(f, true)
                    assert.NoError(t, err)
                    ps = append(ps, p)
                    p, err = contactPath.Append(f, true)
                    assert.NoError(t, err)
                    ps = append(ps, p)
                }

                return ps
            },
            fileNames: []string{"a"},
        },
        {
            name:  "multiple reasons, multiple file",
            manID: "multi multi",
            reasons: []kopia.Reason{
                {
                    ResourceOwner: ro,
                    Service:       path.ExchangeService,
                    Category:      path.EmailCategory,
                },
                {
                    ResourceOwner: ro,
                    Service:       path.ExchangeService,
                    Category:      path.ContactsCategory,
                },
            },
            expectPaths: func(t *testing.T, files []string) []path.Path {
                ps := make([]path.Path, 0, len(files))

                for _, f := range files {
                    p, err := emailPath.Append(f, true)
                    assert.NoError(t, err)
                    ps = append(ps, p)
                    p, err = contactPath.Append(f, true)
                    assert.NoError(t, err)
                    ps = append(ps, p)
                }

                return ps
            },
            fileNames: []string{"a", "b"},
        },
    }
    for _, test := range table {
        suite.T().Run(test.name, func(t *testing.T) {
            ctx, flush := tester.NewContext()
            defer flush()

            paths := test.expectPaths(t, test.fileNames)

            mr := mockRestorer{err: test.expectErr}
            mr.buildRestoreFunc(t, test.manID, paths)

            man := &kopia.ManifestEntry{
                Manifest: &snapshot.Manifest{ID: manifest.ID(test.manID)},
                Reasons:  test.reasons,
            }

            _, err := collectMetadata(ctx, &mr, man, test.fileNames, tid)
            assert.ErrorIs(t, err, test.expectErr)
        })
    }
}

func (suite *OperationsManifestsUnitSuite) TestVerifyDistinctBases() {
    ro := "resource_owner"

    table := []struct {
        name   string
        mans   []*kopia.ManifestEntry
        expect assert.ErrorAssertionFunc
    }{
        {
            name: "one manifest, one reason",
            mans: []*kopia.ManifestEntry{
                {
                    Manifest: &snapshot.Manifest{},
                    Reasons: []kopia.Reason{
                        {
                            ResourceOwner: ro,
                            Service:       path.ExchangeService,
                            Category:      path.EmailCategory,
                        },
                    },
                },
            },
            expect: assert.NoError,
        },
        {
            name: "one incomplete manifest",
            mans: []*kopia.ManifestEntry{
                {
                    Manifest: &snapshot.Manifest{IncompleteReason: "ir"},
                },
            },
            expect: assert.NoError,
        },
        {
            name: "one manifest, multiple reasons",
            mans: []*kopia.ManifestEntry{
                {
                    Manifest: &snapshot.Manifest{},
                    Reasons: []kopia.Reason{
                        {
                            ResourceOwner: ro,
                            Service:       path.ExchangeService,
                            Category:      path.EmailCategory,
                        },
                        {
                            ResourceOwner: ro,
                            Service:       path.ExchangeService,
                            Category:      path.ContactsCategory,
                        },
                    },
                },
            },
            expect: assert.NoError,
        },
        {
            name: "one manifest, duplicate reasons",
            mans: []*kopia.ManifestEntry{
                {
                    Manifest: &snapshot.Manifest{},
                    Reasons: []kopia.Reason{
                        {
                            ResourceOwner: ro,
                            Service:       path.ExchangeService,
                            Category:      path.EmailCategory,
                        },
                        {
                            ResourceOwner: ro,
                            Service:       path.ExchangeService,
                            Category:      path.EmailCategory,
                        },
                    },
                },
            },
            expect: assert.Error,
        },
        {
            name: "two manifests, non-overlapping reasons",
            mans: []*kopia.ManifestEntry{
                {
                    Manifest: &snapshot.Manifest{},
                    Reasons: []kopia.Reason{
                        {
                            ResourceOwner: ro,
                            Service:       path.ExchangeService,
                            Category:      path.EmailCategory,
                        },
                    },
                },
                {
                    Manifest: &snapshot.Manifest{},
                    Reasons: []kopia.Reason{
                        {
                            ResourceOwner: ro,
                            Service:       path.ExchangeService,
                            Category:      path.ContactsCategory,
                        },
                    },
                },
            },
            expect: assert.NoError,
        },
        {
            name: "two manifests, overlapping reasons",
            mans: []*kopia.ManifestEntry{
                {
                    Manifest: &snapshot.Manifest{},
                    Reasons: []kopia.Reason{
                        {
                            ResourceOwner: ro,
                            Service:       path.ExchangeService,
                            Category:      path.EmailCategory,
                        },
                    },
                },
                {
                    Manifest: &snapshot.Manifest{},
                    Reasons: []kopia.Reason{
                        {
                            ResourceOwner: ro,
                            Service:       path.ExchangeService,
                            Category:      path.EmailCategory,
                        },
                    },
                },
            },
            expect: assert.Error,
        },
        {
            name: "two manifests, overlapping reasons, one snapshot incomplete",
            mans: []*kopia.ManifestEntry{
                {
                    Manifest: &snapshot.Manifest{},
                    Reasons: []kopia.Reason{
                        {
                            ResourceOwner: ro,
                            Service:       path.ExchangeService,
                            Category:      path.EmailCategory,
                        },
                    },
                },
                {
                    Manifest: &snapshot.Manifest{IncompleteReason: "ir"},
                    Reasons: []kopia.Reason{
                        {
                            ResourceOwner: ro,
                            Service:       path.ExchangeService,
                            Category:      path.EmailCategory,
                        },
                    },
                },
            },
            expect: assert.NoError,
        },
    }
    for _, test := range table {
        suite.T().Run(test.name, func(t *testing.T) {
            err := verifyDistinctBases(test.mans)
            test.expect(t, err)
        })
    }
}
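TestCollectMetadata above exercises the fan-out in collectMetadata: one restore path per (metadata file, reason) pair. A standalone sketch of that cross product, with plain strings standing in for the typed path.Path values and an illustrative segment layout (not the real metadata path format):

```go
package main

import (
	"fmt"
	"strings"
)

// Reason is a hypothetical stand-in for kopia.Reason.
type Reason struct {
	ResourceOwner, Service, Category string
}

// metadataPaths builds one path per (metadata file, reason) pair, mirroring
// the nested loops in collectMetadata.
func metadataPaths(tenant string, reasons []Reason, fileNames []string) []string {
	paths := make([]string, 0, len(fileNames)*len(reasons))

	for _, fn := range fileNames {
		for _, r := range reasons {
			paths = append(paths, strings.Join(
				[]string{tenant, r.Service, r.ResourceOwner, r.Category, fn}, "/"))
		}
	}

	return paths
}

func main() {
	ps := metadataPaths(
		"tid",
		[]Reason{{"ro", "exchange", "email"}, {"ro", "exchange", "contacts"}},
		[]string{"a", "b"})

	fmt.Println(len(ps)) // 4
	fmt.Println(ps[0])   // tid/exchange/ro/email/a
}
```

This is the same 2 files x 2 reasons = 4 paths shape the "multiple reasons, multiple file" table case asserts.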
func (suite *OperationsManifestsUnitSuite) TestProduceManifestsAndMetadata() {
    const (
        ro  = "resourceowner"
        tid = "tenantid"
        did = "detailsid"
    )

    makeMan := func(pct path.CategoryType, id, incmpl, bid string) *kopia.ManifestEntry {
        tags := map[string]string{}
        if len(bid) > 0 {
            tags = map[string]string{"tag:" + kopia.TagBackupID: bid}
        }

        return &kopia.ManifestEntry{
            Manifest: &snapshot.Manifest{
                ID:               manifest.ID(id),
                IncompleteReason: incmpl,
                Tags:             tags,
            },
            Reasons: []kopia.Reason{
                {
                    ResourceOwner: ro,
                    Service:       path.ExchangeService,
                    Category:      pct,
                },
            },
        }
    }

    table := []struct {
        name          string
        mr            mockManifestRestorer
        gdi           mockGetDetailsIDer
        reasons       []kopia.Reason
        getMeta       bool
        assertErr     assert.ErrorAssertionFunc
        assertB       assert.BoolAssertionFunc
        expectDCS     []data.Collection
        expectNilMans bool
    }{
        {
            name: "don't get metadata, no mans",
            mr: mockManifestRestorer{
                mockRestorer: mockRestorer{},
                mans:         []*kopia.ManifestEntry{},
            },
            gdi:       mockGetDetailsIDer{detailsID: did},
            reasons:   []kopia.Reason{},
            getMeta:   false,
            assertErr: assert.NoError,
            assertB:   assert.False,
            expectDCS: nil,
        },
        {
            name: "don't get metadata",
            mr: mockManifestRestorer{
                mockRestorer: mockRestorer{},
                mans:         []*kopia.ManifestEntry{makeMan(path.EmailCategory, "", "", "")},
            },
            gdi:       mockGetDetailsIDer{detailsID: did},
            reasons:   []kopia.Reason{},
            getMeta:   false,
            assertErr: assert.NoError,
            assertB:   assert.False,
            expectDCS: nil,
        },
        {
            name: "don't get metadata, incomplete manifest",
            mr: mockManifestRestorer{
                mockRestorer: mockRestorer{},
                mans:         []*kopia.ManifestEntry{makeMan(path.EmailCategory, "", "ir", "")},
            },
            gdi:       mockGetDetailsIDer{detailsID: did},
            reasons:   []kopia.Reason{},
            getMeta:   false,
            assertErr: assert.NoError,
            assertB:   assert.False,
            expectDCS: nil,
        },
        {
            name: "fetch manifests errors",
            mr: mockManifestRestorer{
                mockRestorer: mockRestorer{},
                mrErr:        assert.AnError,
            },
            gdi:       mockGetDetailsIDer{detailsID: did},
            reasons:   []kopia.Reason{},
            getMeta:   true,
            assertErr: assert.Error,
            assertB:   assert.False,
            expectDCS: nil,
        },
        {
            name: "verify distinct bases fails",
            mr: mockManifestRestorer{
                mockRestorer: mockRestorer{},
                mans: []*kopia.ManifestEntry{
                    makeMan(path.EmailCategory, "", "", ""),
                    makeMan(path.EmailCategory, "", "", ""),
                },
            },
            gdi:       mockGetDetailsIDer{detailsID: did},
            reasons:   []kopia.Reason{},
            getMeta:   true,
            assertErr: assert.NoError, // No error, even though verify failed.
            assertB:   assert.False,
            expectDCS: nil,
        },
        {
            name: "no manifests",
            mr: mockManifestRestorer{
                mockRestorer: mockRestorer{},
                mans:         []*kopia.ManifestEntry{},
            },
            gdi:       mockGetDetailsIDer{detailsID: did},
            reasons:   []kopia.Reason{},
            getMeta:   true,
            assertErr: assert.NoError,
            assertB:   assert.True,
            expectDCS: nil,
        },
        {
            name: "only incomplete manifests",
            mr: mockManifestRestorer{
                mockRestorer: mockRestorer{},
                mans: []*kopia.ManifestEntry{
                    makeMan(path.EmailCategory, "", "ir", ""),
                    makeMan(path.ContactsCategory, "", "ir", ""),
                },
            },
            gdi:       mockGetDetailsIDer{detailsID: did},
            reasons:   []kopia.Reason{},
            getMeta:   true,
            assertErr: assert.NoError,
            assertB:   assert.True,
            expectDCS: nil,
        },
        {
            name: "man missing backup id",
            mr: mockManifestRestorer{
                mockRestorer: mockRestorer{collsByID: map[string][]data.Collection{
                    "id": {mockColl{id: "id_coll"}},
                }},
                mans: []*kopia.ManifestEntry{makeMan(path.EmailCategory, "id", "", "")},
            },
            gdi:           mockGetDetailsIDer{detailsID: did},
            reasons:       []kopia.Reason{},
            getMeta:       true,
            assertErr:     assert.Error,
            assertB:       assert.False,
            expectNilMans: true,
        },
        {
            name: "backup missing details id",
            mr: mockManifestRestorer{
                mockRestorer: mockRestorer{},
                mans:         []*kopia.ManifestEntry{makeMan(path.EmailCategory, "", "", "bid")},
            },
            gdi:       mockGetDetailsIDer{},
            reasons:   []kopia.Reason{},
            getMeta:   true,
            assertErr: assert.NoError,
            assertB:   assert.False,
        },
        {
            name: "one complete, one incomplete",
            mr: mockManifestRestorer{
                mockRestorer: mockRestorer{collsByID: map[string][]data.Collection{
                    "id":        {mockColl{id: "id_coll"}},
                    "incmpl_id": {mockColl{id: "incmpl_id_coll"}},
                }},
                mans: []*kopia.ManifestEntry{
                    makeMan(path.EmailCategory, "id", "", "bid"),
                    makeMan(path.EmailCategory, "incmpl_id", "ir", ""),
                },
            },
            gdi:       mockGetDetailsIDer{detailsID: did},
            reasons:   []kopia.Reason{},
            getMeta:   true,
            assertErr: assert.NoError,
            assertB:   assert.True,
            expectDCS: []data.Collection{mockColl{id: "id_coll"}},
        },
        {
            name: "single valid man",
            mr: mockManifestRestorer{
                mockRestorer: mockRestorer{collsByID: map[string][]data.Collection{
                    "id": {mockColl{id: "id_coll"}},
                }},
                mans: []*kopia.ManifestEntry{makeMan(path.EmailCategory, "id", "", "bid")},
            },
            gdi:       mockGetDetailsIDer{detailsID: did},
            reasons:   []kopia.Reason{},
            getMeta:   true,
            assertErr: assert.NoError,
            assertB:   assert.True,
            expectDCS: []data.Collection{mockColl{id: "id_coll"}},
        },
        {
            name: "multiple valid mans",
            mr: mockManifestRestorer{
                mockRestorer: mockRestorer{collsByID: map[string][]data.Collection{
                    "mail":    {mockColl{id: "mail_coll"}},
                    "contact": {mockColl{id: "contact_coll"}},
                }},
                mans: []*kopia.ManifestEntry{
                    makeMan(path.EmailCategory, "mail", "", "bid"),
                    makeMan(path.ContactsCategory, "contact", "", "bid"),
                },
            },
            gdi:       mockGetDetailsIDer{detailsID: did},
            reasons:   []kopia.Reason{},
            getMeta:   true,
            assertErr: assert.NoError,
            assertB:   assert.True,
            expectDCS: []data.Collection{
                mockColl{id: "mail_coll"},
                mockColl{id: "contact_coll"},
            },
        },
        {
            name: "error collecting metadata",
            mr: mockManifestRestorer{
                mockRestorer: mockRestorer{err: assert.AnError},
                mans:         []*kopia.ManifestEntry{makeMan(path.EmailCategory, "", "", "bid")},
            },
            gdi:           mockGetDetailsIDer{detailsID: did},
            reasons:       []kopia.Reason{},
            getMeta:       true,
            assertErr:     assert.Error,
            assertB:       assert.False,
            expectDCS:     nil,
            expectNilMans: true,
        },
    }
    for _, test := range table {
        suite.T().Run(test.name, func(t *testing.T) {
            ctx, flush := tester.NewContext()
            defer flush()

            mans, dcs, b, err := produceManifestsAndMetadata(
                ctx,
                &test.mr,
                &test.gdi,
                test.reasons,
                tid,
                test.getMeta,
            )
            test.assertErr(t, err)
            test.assertB(t, b)

            expectMans := test.mr.mans
            if test.expectNilMans {
                expectMans = nil
            }
            assert.Equal(t, expectMans, mans)

            expect, got := []string{}, []string{}

            for _, dc := range test.expectDCS {
                mc, ok := dc.(mockColl)
                assert.True(t, ok)

                expect = append(expect, mc.id)
            }

            for _, dc := range dcs {
                mc, ok := dc.(mockColl)
                assert.True(t, ok)

                got = append(got, mc.id)
            }

            assert.ElementsMatch(t, expect, got, "expected collections are present")
        })
    }
}
@@ -94,7 +94,7 @@ func connectToM365(
    sel selectors.Selector,
    acct account.Account,
) (*connector.GraphConnector, error) {
    complete, closer := observe.MessageWithCompletion("Connecting to M365:")
    complete, closer := observe.MessageWithCompletion(ctx, "Connecting to M365")
    defer func() {
        complete <- struct{}{}
        close(complete)

@@ -159,9 +159,9 @@ func (op *RestoreOperation) Run(ctx context.Context) (restoreDetails *details.De
        return nil, err
    }

    observe.Message(fmt.Sprintf("Discovered %d items in backup %s to restore", len(paths), op.BackupID))
    observe.Message(ctx, fmt.Sprintf("Discovered %d items in backup %s to restore", len(paths), op.BackupID))

    kopiaComplete, closer := observe.MessageWithCompletion("Enumerating items in repository:")
    kopiaComplete, closer := observe.MessageWithCompletion(ctx, "Enumerating items in repository")
    defer closer()
    defer close(kopiaComplete)

@@ -183,7 +183,7 @@ func (op *RestoreOperation) Run(ctx context.Context) (restoreDetails *details.De
        return nil, opStats.readErr
    }

    restoreComplete, closer := observe.MessageWithCompletion("Restoring data:")
    restoreComplete, closer := observe.MessageWithCompletion(ctx, "Restoring data")
    defer closer()
    defer close(restoreComplete)

@@ -173,7 +173,7 @@ func (b *Builder) AddFoldersForItem(folders []folderEntry, itemInfo ItemInfo, up
    }

    // Update the folder's size and modified time
    itemModified := itemInfo.modified()
    itemModified := itemInfo.Modified()

    folder.Info.Folder.Size += itemInfo.size()

@@ -381,7 +381,7 @@ func (i ItemInfo) size() int64 {
    return 0
}

func (i ItemInfo) modified() time.Time {
func (i ItemInfo) Modified() time.Time {
    switch {
    case i.Exchange != nil:
        return i.Exchange.Modified
@@ -477,6 +477,7 @@ func (i ExchangeInfo) Values() []string {
type SharePointInfo struct {
    Created   time.Time `json:"created,omitempty"`
    ItemName  string    `json:"itemName,omitempty"`
    DriveName string    `json:"driveName,omitempty"`
    ItemType  ItemType  `json:"itemType,omitempty"`
    Modified  time.Time `json:"modified,omitempty"`
Owner string `json:"owner,omitempty"`
|
||||
@@ -488,7 +489,7 @@ type SharePointInfo struct {
// Headers returns the human-readable names of properties in a SharePointInfo
// for printing out to a terminal in a columnar display.
func (i SharePointInfo) Headers() []string {
return []string{"ItemName", "ParentPath", "Size", "WebURL", "Created", "Modified"}
return []string{"ItemName", "Drive", "ParentPath", "Size", "WebURL", "Created", "Modified"}
}

// Values returns the values matching the Headers list for printing

@@ -496,6 +497,7 @@ func (i SharePointInfo) Headers() []string {
func (i SharePointInfo) Values() []string {
return []string{
i.ItemName,
i.DriveName,
i.ParentPath,
humanize.Bytes(uint64(i.Size)),
i.WebURL,

@@ -518,8 +520,8 @@ func (i *SharePointInfo) UpdateParentPath(newPath path.Path) error {
// OneDriveInfo describes a oneDrive item
type OneDriveInfo struct {
Created time.Time `json:"created,omitempty"`
ItemName string `json:"itemName"`
DriveName string `json:"driveName"`
ItemName string `json:"itemName,omitempty"`
DriveName string `json:"driveName,omitempty"`
ItemType ItemType `json:"itemType,omitempty"`
Modified time.Time `json:"modified,omitempty"`
Owner string `json:"owner,omitempty"`
@@ -107,13 +107,23 @@ func (suite *DetailsUnitSuite) TestDetailsEntry_HeadersValues() {
ParentPath: "parentPath",
Size: 1000,
WebURL: "https://not.a.real/url",
DriveName: "aDrive",
Created: now,
Modified: now,
},
},
},
expectHs: []string{"ID", "ItemName", "ParentPath", "Size", "WebURL", "Created", "Modified"},
expectVs: []string{"deadbeef", "itemName", "parentPath", "1.0 kB", "https://not.a.real/url", nowStr, nowStr},
expectHs: []string{"ID", "ItemName", "Drive", "ParentPath", "Size", "WebURL", "Created", "Modified"},
expectVs: []string{
"deadbeef",
"itemName",
"aDrive",
"parentPath",
"1.0 kB",
"https://not.a.real/url",
nowStr,
nowStr,
},
},
{
name: "oneDrive info",
@@ -3,6 +3,8 @@ package logger
import (
"context"
"os"
"path/filepath"
"time"

"github.com/spf13/cobra"
"github.com/spf13/pflag"

@@ -10,6 +12,9 @@ import (
"go.uber.org/zap/zapcore"
)

// Default location for writing logs, initialized in platform specific files
var userLogsDir string

var (
logCore *zapcore.Core
loggerton *zap.SugaredLogger

@@ -17,6 +22,9 @@ var (
// logging level flag
llFlag = "info"

// logging file flags
lfFlag = ""

DebugAPI bool
readableOutput bool
)

@@ -34,17 +42,26 @@ const (
const (
debugAPIFN = "debug-api-calls"
logLevelFN = "log-level"
logFileFN = "log-file"
readableLogsFN = "readable-logs"
)

// adds the persistent flag --log-level to the provided command.
// defaults to "info".
// Returns the default location for writing logs
func defaultLogLocation() string {
return filepath.Join(userLogsDir, "corso", "logs", time.Now().UTC().Format("2006-01-02T15-04-05Z")+".log")
}

// adds the persistent flags --log-level and --log-file to the provided command.
// defaults to "info" and the default log location.
// This is a hack for help displays. Due to seeding the context, we also
// need to parse the log level before we execute the command.
func AddLogLevelFlag(cmd *cobra.Command) {
func AddLoggingFlags(cmd *cobra.Command) {
fs := cmd.PersistentFlags()
fs.StringVar(&llFlag, logLevelFN, "info", "set the log level to debug|info|warn|error")

// The default provided here is only for help info
fs.StringVar(&lfFlag, logFileFN, "corso-<timestamp>.log", "location for writing logs, use '-' for stdout")

fs.Bool(debugAPIFN, false, "add non-2xx request/response errors to logging")

fs.Bool(

@@ -54,13 +71,17 @@ func AddLogLevelFlag(cmd *cobra.Command) {
fs.MarkHidden(readableLogsFN)
}

// Due to races between the lazy evaluation of flags in cobra and the need to init logging
// behavior in a ctx, log-level gets pre-processed manually here using pflags. The canonical
// AddLogLevelFlag() ensures the flag is displayed as part of the help/usage output.
func PreloadLogLevel() string {
// Due to races between the lazy evaluation of flags in cobra and the
// need to init logging behavior in a ctx, log-level and log-file get
// pre-processed manually here using pflags. The canonical
// AddLogLevelFlag() and AddLogFileFlag() ensure the flags are
// displayed as part of the help/usage output.
func PreloadLoggingFlags() (string, string) {
dlf := defaultLogLocation()
fs := pflag.NewFlagSet("seed-logger", pflag.ContinueOnError)
fs.ParseErrorsWhitelist.UnknownFlags = true
fs.String(logLevelFN, "info", "set the log level to debug|info|warn|error")
fs.String(logFileFN, dlf, "location for writing logs")
fs.BoolVar(&DebugAPI, debugAPIFN, false, "add non-2xx request/response errors to logging")
fs.BoolVar(&readableOutput, readableLogsFN, false, "minimizes log output: removes the file and date, colors the level")
// prevents overriding the corso/cobra help processor

@@ -68,20 +89,40 @@ func PreloadLogLevel() {

// parse the os args list to find the log level flag
if err := fs.Parse(os.Args[1:]); err != nil {
return "info"
return "info", dlf
}

// retrieve the user's preferred log level
// automatically defaults to "info"
levelString, err := fs.GetString(logLevelFN)
if err != nil {
return "info"
return "info", dlf
}

return levelString
// retrieve the user's preferred log file location
// automatically defaults to default log location
logfile, err := fs.GetString(logFileFN)
if err != nil {
return "info", dlf
}

if logfile == "-" {
logfile = "stdout"
}

if logfile != "stdout" && logfile != "stderr" {
logdir := filepath.Dir(logfile)

err := os.MkdirAll(logdir, 0o755)
if err != nil {
return "info", "stderr"
}
}

return levelString, logfile
}

func genLogger(level logLevel) (*zapcore.Core, *zap.SugaredLogger) {
func genLogger(level logLevel, logfile string) (*zapcore.Core, *zap.SugaredLogger) {
// when testing, ensure debug logging matches the test.v setting
for _, arg := range os.Args {
if arg == `--test.v=true` {

@@ -136,20 +177,23 @@ func genLogger(level logLevel) (*zapcore.Core, *zap.SugaredLogger) {
cfg.EncoderConfig.EncodeLevel = zapcore.CapitalColorLevelEncoder
}

cfg.OutputPaths = []string{logfile}
lgr, err = cfg.Build(opts...)
} else {
lgr, err = zap.NewProduction()
cfg := zap.NewProductionConfig()
cfg.OutputPaths = []string{logfile}
lgr, err = cfg.Build()
}

// fall back to the core config if the default creation fails
if err != nil {
lgr = zap.New(*logCore)
lgr = zap.New(core)
}

return &core, lgr.Sugar()
}

func singleton(level logLevel) *zap.SugaredLogger {
func singleton(level logLevel, logfile string) *zap.SugaredLogger {
if loggerton != nil {
return loggerton
}

@@ -161,7 +205,7 @@ func singleton(level logLevel) *zap.SugaredLogger {
return loggerton
}

logCore, loggerton = genLogger(level)
logCore, loggerton = genLogger(level, logfile)

return loggerton
}

@@ -178,12 +222,12 @@ const ctxKey loggingKey = "corsoLogger"
// It also parses the command line for flag values prior to executing
// cobra. This early parsing is necessary since logging depends on
// a seeded context prior to cobra evaluating flags.
func Seed(ctx context.Context, lvl string) (context.Context, *zap.SugaredLogger) {
func Seed(ctx context.Context, lvl, logfile string) (context.Context, *zap.SugaredLogger) {
if len(lvl) == 0 {
lvl = "info"
}

zsl := singleton(levelOf(lvl))
zsl := singleton(levelOf(lvl), logfile)

return Set(ctx, zsl), zsl
}

@@ -192,7 +236,7 @@ func Seed(ctx context.Context, lvl string) (context.Context, *zap.SugaredLogger)
func SeedLevel(ctx context.Context, level logLevel) (context.Context, *zap.SugaredLogger) {
l := ctx.Value(ctxKey)
if l == nil {
zsl := singleton(level)
zsl := singleton(level, defaultLogLocation())
return Set(ctx, zsl), zsl
}

@@ -212,7 +256,7 @@ func Set(ctx context.Context, logger *zap.SugaredLogger) context.Context {
func Ctx(ctx context.Context) *zap.SugaredLogger {
l := ctx.Value(ctxKey)
if l == nil {
return singleton(levelOf(llFlag))
return singleton(levelOf(llFlag), defaultLogLocation())
}

return l.(*zap.SugaredLogger)
10
src/pkg/logger/logpath_darwin.go
Normal file
@@ -0,0 +1,10 @@
package logger

import (
"os"
"path/filepath"
)

func init() {
userLogsDir = filepath.Join(os.Getenv("HOME"), "Library", "Logs")
}
9
src/pkg/logger/logpath_windows.go
Normal file
@@ -0,0 +1,9 @@
package logger

import (
"os"
)

func init() {
userLogsDir = os.Getenv("LOCALAPPDATA")
}
17
src/pkg/logger/logpath_xdg.go
Normal file
@@ -0,0 +1,17 @@
//go:build !windows && !darwin
// +build !windows,!darwin

package logger

import (
"os"
"path/filepath"
)

func init() {
if os.Getenv("XDG_CACHE_HOME") != "" {
userLogsDir = os.Getenv("XDG_CACHE_HOME")
} else {
userLogsDir = filepath.Join(os.Getenv("HOME"), ".cache")
}
}
@@ -15,12 +15,13 @@ func _() {
_ = x[FilesCategory-4]
_ = x[ListsCategory-5]
_ = x[LibrariesCategory-6]
_ = x[DetailsCategory-7]
_ = x[PagesCategory-7]
_ = x[DetailsCategory-8]
}

const _CategoryType_name = "UnknownCategoryemailcontactseventsfileslistslibrariesdetails"
const _CategoryType_name = "UnknownCategoryemailcontactseventsfileslistslibrariespagesdetails"

var _CategoryType_index = [...]uint8{0, 15, 20, 28, 34, 39, 44, 53, 60}
var _CategoryType_index = [...]uint8{0, 15, 20, 28, 34, 39, 44, 53, 58, 65}

func (i CategoryType) String() string {
if i < 0 || i >= CategoryType(len(_CategoryType_index)-1) {
@@ -65,6 +65,7 @@ const (
FilesCategory // files
ListsCategory // lists
LibrariesCategory // libraries
PagesCategory // pages
DetailsCategory // details
)

@@ -82,6 +83,8 @@ func ToCategoryType(category string) CategoryType {
return LibrariesCategory
case ListsCategory.String():
return ListsCategory
case PagesCategory.String():
return PagesCategory
case DetailsCategory.String():
return DetailsCategory
default:

@@ -103,6 +106,7 @@ var serviceCategories = map[ServiceType]map[CategoryType]struct{}{
SharePointService: {
LibrariesCategory: {},
ListsCategory: {},
PagesCategory: {},
},
}
@@ -116,6 +116,13 @@ var (
return pb.ToDataLayerSharePointPath(tenant, site, path.ListsCategory, isItem)
},
},
{
service: path.SharePointService,
category: path.PagesCategory,
pathFunc: func(pb *path.Builder, tenant, site string, isItem bool) (path.Path, error) {
return pb.ToDataLayerSharePointPath(tenant, site, path.PagesCategory, isItem)
},
},
}
)

@@ -300,6 +307,13 @@ func (suite *DataLayerResourcePath) TestToServiceCategoryMetadataPath() {
expectedService: path.SharePointMetadataService,
check: assert.NoError,
},
{
name: "Passes",
service: path.SharePointService,
category: path.PagesCategory,
expectedService: path.SharePointMetadataService,
check: assert.NoError,
},
}

for _, test := range table {
@@ -154,7 +154,7 @@ func Connect(
// their output getting clobbered (#1720)
defer observe.Complete()

complete, closer := observe.MessageWithCompletion("Connecting to repository:")
complete, closer := observe.MessageWithCompletion(ctx, "Connecting to repository")
defer closer()
defer close(complete)
87
website/blog/2023-1-4-backups-on-your-coffee-break.md
Normal file
@@ -0,0 +1,87 @@
---
slug: backups-on-your-coffee-break
title: "How to Back Up Your Microsoft 365 Data During Your Coffee Break"
description: "A quick guide to using Corso for data backups"
authors: nica
tags: [corso, microsoft 365, backups]
date: 2023-1-12
image: ./images/coffee_break.jpg
---

|
||||
|
||||
It’s 10:00 in the morning, and you need coffee and a snack.
|
||||
You know you’re supposed to back up the company’s Microsoft 365 instance, but it takes so long! Surely a quick
|
||||
break won’t matter.
|
||||
|
||||
Wrong! While you were in the break room,
|
||||
your organization was hit with a malware attack that wiped out many critical files and spreadsheets in minutes.
|
||||
Now your cell phone’s ringing off the hook.
|
||||
Slapping your forehead with the palm of your hand, you shout,
|
||||
“If only backups were faster and easier!”
|
||||
|
||||
<!-- truncate -->
|
||||
|
||||
Regular backups are increasingly important and must be prioritized; even over your coffee break. A recent study by
|
||||
[Arlington Research](https://www.businesswire.com/news/home/20210511005132/en/An-Alarming-85-of-Organizations-Using-Microsoft-365-Have-Suffered-Email-Data-Breaches-Research-by-Egress-Reveals#:~:text=15%25%20of%20organizations%20using%20Microsoft,data%20in%20error%20via%20email.)
|
||||
found that 85% of organizations using Microsoft 365 suffered email data breaches in the six months prior to May 2021.
|
||||
And it’s not just malware that threatens to corrupt data; downtime can have equally devastating impacts.
|
||||
[Two out of every five servers](https://www.veeam.com/blog/data-loss-2022.html)
|
||||
experienced an outage over the past 12 months.
|
||||
Data can also be lost or corrupted during poorly executed migrations or the cancellation of a software license
|
||||
or by human error. And once it’s gone, it’s gone, unless you’ve backed it up.
|
||||
Think you can just move stuff back out of the recycling bin? Think again. Ransomware will also clear your recycling bin,
|
||||
even Microsoft recommends [emptying it out regularly](https://learn.microsoft.com/en-us/office365/servicedescriptions/sharepoint-online-service-description/sharepoint-online-limits).
|
||||
Use of other tools like 'holds' [also have their limits](https://learn.microsoft.com/en-us/office365/servicedescriptions/sharepoint-online-service-description/sharepoint-online-limits#hold-limits)
|
||||
(and really they're intended for e-discovery),
|
||||
and are no substitutes for true backups.
|
||||
|
||||
The question really is: why wouldn’t you back up your Microsoft 365 data?
|
||||
|
||||
IDC estimates that [six out of every 10 organizations](https://www.dsm.net/idc-why-backup-for-office-365-is-essential)
|
||||
don’t have a data protection plan for their Microsoft 365 data.
|
||||
Why? Because, historically, Microsoft 365 backups have been slow,
|
||||
tedious and expensive, requiring complex workflows and scripts, and constant supervision:
|
||||
|
||||
- Companies often face physical limitations of their storage devices, such as servers, external hard drives, or other media.
|
||||
They may have to choose what data to backup or compromise on how often they back it up.
|
||||
- Backups can be time-consuming, especially without automation.
|
||||
Often, someone has to monitor the process to address any issues that arise. With their to-do list growing day by day,
|
||||
IT security teams must often prioritize more urgent work.
|
||||
- Manual backups aren’t just slow and tedious, they’re unreliable. When work is busy, or when your employee’s stomach
|
||||
is growling -it may be pushed to the bottom of the priority list.
|
||||
|
||||
Considering these challenges, it’s clear to see why an IT security staffer might put backups on the back burner.
|
||||
|
||||
## A Faster, Easier Way to Back up Your Data
|
||||
|
||||
Fortunately, [Corso](https://corsobackup.io/), a free and open-source tool, is enabling IT administrators to backup all
|
||||
their M365 data during their morning coffee break -or while their lunch is in the microwave. Here’s how:
|
||||
|
||||
- Purpose-built for Microsoft 365, Corso provides comprehensive backup and restore workflows that slash backup time and overhead.
|
||||
- It’s free: because Corso is 100% open-source. Flexible retention policies reduce storage costs, as well. Corso works
|
||||
with any S3-compatible object storage system, including AWS, Google Cloud, Backblaze and Azure Blob.
|
||||
- It’s fast! Corso doesn’t use unreliable scripts or workarounds. Instead,
|
||||
its automated, high-throughput, high-tolerance backups feature end-to-end encryption, deduplication and compression.
|
||||
Corso is written in Go, a modern programming language that came out of Google that has been purpose-built for systems programming.
|
||||
A typical Corso backup takes just a few minutes- and you can drink your coffee while it’s running!
|
||||
|
||||
How do you backup your data with Corso? It takes just a few minutes to get started. Check out the [Quick Start](https://corsobackup.io/docs/quickstart/)
|
||||
guide for a step-by-step walk through:
|
||||
|
||||
1. Download Corso
|
||||
|
||||
1. Connect to Microsoft 365
|
||||
|
||||
1. Create a Corso repository
|
||||
|
||||
1. Create your backup
|
||||
|
||||
And here’s my [video](https://youtu.be/mlwfEbPqD94) showing how the steps take less than 4 minutes.
|
||||
|
||||
Yep, that’s it. With these few steps, Corso protects your team’s data from accidental loss, deletion, server downtime,
|
||||
security threats and ransomware. Don’t leave Microsoft 365 data protection to chance
|
||||
-and use your coffee break to relax instead of
|
||||
worry!
|
||||
|
||||
Give [Corso](https://corsobackup.io/) a try, and then tell us what you think. Find the Corso community on [Discord](https://discord.gg/63DTTSnuhT).
|
||||
BIN
website/blog/images/coffee_break.jpg
Normal file
Binary file not shown. (Size: 261 KiB)
@@ -18,6 +18,13 @@ If you don't have Go available, you can find installation instructions [here](ht

This will generate a binary named `corso` in the directory where you run the build.

:::note
You can download binary artifacts of the latest commit from GitHub by
navigating to the "Summary" page of the `Build/Release Corso` CI job
that was run for that commit.
You will find the artifacts at the bottom of the page.
:::

### Building via Docker

For convenience, the Corso build tooling is containerized. To take advantage, you need
@@ -126,3 +126,32 @@ directory within the container.

</TabItem>
</Tabs>

## Log Files

Corso writes its log file to the default location shown below, but the location can be overridden with the `--log-file` flag.
You can also pass `stdout` or `stderr` as the `--log-file` value to redirect logs to standard output or standard error, respectively.

<Tabs groupId="os">
<TabItem value="win" label="Windows">

```powershell
%LocalAppData%\corso\logs\<timestamp>.log
```

</TabItem>
<TabItem value="unix" label="Linux">

```bash
$HOME/.cache/corso/logs/<timestamp>.log
```

</TabItem>
<TabItem value="macos" label="macOS">

```bash
$HOME/Library/Logs/corso/logs/<timestamp>.log
```

</TabItem>
</Tabs>

@@ -4,4 +4,5 @@ You can learn more about the Corso roadmap and how to interpret it [here](https:

If you run into a bug or have feature requests, please file a [GitHub issue](https://github.com/alcionai/corso/issues/)
and attach the `bug` or `enhancement` label to the issue. When filing bugs, please run Corso with `--log-level debug`
and add the logs to the bug report.
and add the logs to the bug report. You can find more information about where logs are stored in the
[log files](../../setup/configuration/#log-files) section of the setup docs.

@@ -35,3 +35,5 @@ cyberattack
Atlassian
SLAs
runbooks
stdout
stderr