## Description
Added retry handling for delta queries in OneDrive. Also, bumping the timeout for graph api calls from 90s to 3m, as we were seeing client timeouts on graph api calls. ~Haven't added retry for every request in exchange as I'm hoping https://github.com/alcionai/corso/issues/2287 will be a better way to handle this.~
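Roughly the shape of the change (a stdlib-only sketch; `fetchDeltaPage`, `deltaPage`, and the backoff values are illustrative, not the actual corso code):
```go
package graphretry

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// Timeout bumped from 90s to 3m to avoid client-side timeouts on slow Graph responses.
var graphClient = &http.Client{Timeout: 3 * time.Minute}

type deltaPage struct {
	nextLink string
	items    []string
}

// fetchDeltaPage performs one delta-query request (response decoding elided).
func fetchDeltaPage(ctx context.Context, link string) (*deltaPage, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, link, nil)
	if err != nil {
		return nil, err
	}

	resp, err := graphClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusTooManyRequests || resp.StatusCode >= 500 {
		return nil, fmt.Errorf("retryable status %d", resp.StatusCode)
	}

	return &deltaPage{}, nil
}

// deltaPageWithRetry retries transient delta-query failures with a simple linear backoff.
func deltaPageWithRetry(ctx context.Context, link string, retries int) (*deltaPage, error) {
	var lastErr error

	for attempt := 0; attempt <= retries; attempt++ {
		page, err := fetchDeltaPage(ctx, link)
		if err == nil {
			return page, nil
		}

		lastErr = err

		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(time.Duration(attempt+1) * time.Second):
		}
	}

	return nil, fmt.Errorf("delta query failed after %d retries: %w", retries, lastErr)
}
```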
## Does this PR need a docs update or release note?
- [x] ✅ Yes, it's included
- [ ] 🕐 Yes, but in a later PR
- [ ] ⛔ No
## Type of change
<!--- Please check the type of change your PR introduces: --->
- [ ] 🌻 Feature
- [x] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 💻 CI/Deployment
- [ ] 🧹 Tech Debt/Cleanup
## Issue(s)
<!-- Can reference multiple issues. Use one of the following "magic words" - "closes, fixes" to auto-close the Github issue. -->
* #<issue>
## Test Plan
<!-- How will this be tested prior to merging.-->
- [x] 💪 Manual
- [ ] ⚡ Unit test
- [ ] 💚 E2E
## Description
The current interface makes no sense. This
change refactors the funcs to standard boolean
comparators.
## Does this PR need a docs update or release note?
- [x] ⛔ No
## Type of change
- [x] 🧹 Tech Debt/Cleanup
## Description
If a drive item's pre-authenticated download URL exceeds its 1-hour JWT expiration before we download the backing file, re-fetch the item
and use the new download URL to get the file.
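For illustration, a sketch of the refresh path, using a hypothetical `refetchDownloadURL` helper rather than corso's real Graph call:
```go
package driveitem

import (
	"context"
	"fmt"
	"io"
	"net/http"
)

// refetchDownloadURL stands in for re-fetching the drive item from Graph to
// obtain a fresh pre-authenticated download URL.
func refetchDownloadURL(ctx context.Context, itemID string) (string, error) {
	return "", fmt.Errorf("not implemented in this sketch")
}

func get(ctx context.Context, url string) (*http.Response, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	return http.DefaultClient.Do(req)
}

// downloadItem retries the download with a fresh URL if the original
// pre-authenticated URL (which carries a ~1h JWT) has expired.
func downloadItem(ctx context.Context, itemID, url string) (io.ReadCloser, error) {
	resp, err := get(ctx, url)
	if err != nil {
		return nil, err
	}

	if resp.StatusCode == http.StatusUnauthorized || resp.StatusCode == http.StatusForbidden {
		resp.Body.Close()

		fresh, err := refetchDownloadURL(ctx, itemID)
		if err != nil {
			return nil, err
		}

		if resp, err = get(ctx, fresh); err != nil {
			return nil, err
		}
	}

	if resp.StatusCode != http.StatusOK {
		resp.Body.Close()
		return nil, fmt.Errorf("download failed with status %d", resp.StatusCode)
	}

	return resp.Body, nil
}
```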
## Does this PR need a docs update or release note?
- [x] ⛔ No
## Type of change
- [x] 🌻 Feature
## Issue(s)
* #2267
## Test Plan
- [x] 💪 Manual
## Description
This is a quick hack to satisfy a primary case of PII scrubbing. We expect to revisit it in the future.
## Does this PR need a docs update or release note?
- [x] ⛔ No
## Type of change
- [x] 🌻 Feature
## Issue(s)
* #2284
## Test Plan
- [x] 💪 Manual
- [x] ⚡ Unit test
## Description
Wraps the onedrive item download control into
the lazy reader creation, so that items are not
fetched from graph until kopia decides it wants
the bytes. This should only occur after other
checks, like mod-time comparison, have passed,
thus giving us kopia-assisted backups.
## Does this PR need a docs update or release note?
- [x] ⛔ No
## Type of change
- [x] 🐛 Bugfix
## Issue(s)
* closes #2262
## Test Plan
- [x] ⚡ Unit test
- [x] 💚 E2E
## Description
This PR reintroduces the changes from #2266 with a change to *not* reset the transport
when initializing the shared client.
Resetting it was removing the retry and other middlewares
and also resulting in throttled requests being masked as successes.
Also - we now decorate our download traffic with an ISV tag as recommended [here](https://learn.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic)
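For reference, a stdlib-only sketch of the traffic decoration, assuming the tag rides in the `User-Agent` header and wrapping (not replacing) the existing transport so the retry/throttling middleware stays intact; the real kiota client wiring differs:
```go
package graphhttp

import "net/http"

// isvDecorator wraps an existing transport instead of replacing it, so any
// retry/throttling middleware already attached keeps working.
type isvDecorator struct {
	base http.RoundTripper
	tag  string // e.g. "ISV|CompanyName|AppName/Version" (illustrative format)
}

func (d isvDecorator) RoundTrip(req *http.Request) (*http.Response, error) {
	r := req.Clone(req.Context())
	r.Header.Set("User-Agent", d.tag)
	return d.base.RoundTrip(r)
}

// newDecoratedClient layers the decorator on top of whatever transport the
// shared client already uses.
func newDecoratedClient(base *http.Client, tag string) *http.Client {
	rt := base.Transport
	if rt == nil {
		rt = http.DefaultTransport
	}

	clone := *base
	clone.Transport = isvDecorator{base: rt, tag: tag}

	return &clone
}
```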
## Does this PR need a docs update or release note?
- [ ] ✅ Yes, it's included
- [x] 🕐 Yes, but in a later PR
- [ ] ⛔ No
## Type of change
<!--- Please check the type of change your PR introduces: --->
- [ ] 🌻 Feature
- [x] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 💻 CI/Deployment
- [ ] 🧹 Tech Debt/Cleanup
## Issue(s)
<!-- Can reference multiple issues. Use one of the following "magic words" - "closes, fixes" to auto-close the Github issue. -->
* #2266
## Test Plan
<!-- How will this be tested prior to merging.-->
- [ ] 💪 Manual
- [ ] ⚡ Unit test
- [x] 💚 E2E
## Description
onedrive currently constructs a new http client
for every file it downloads. This causes the OS
to generate extra sockets and hang onto them
after the download is complete. Replacing these
one-off clients with a single, re-used client
(the standard suggested for golang http clients)
minimizes system resource consumption.
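A minimal sketch of the shared-client pattern (names, timeout, and pool settings are illustrative):
```go
package drivedownload

import (
	"net/http"
	"time"
)

// downloadClient is created once and reused for every file download, letting
// the transport pool and reuse connections instead of leaking sockets from
// per-file clients. Values here are illustrative.
var downloadClient = &http.Client{
	Timeout: 3 * time.Minute,
	Transport: &http.Transport{
		MaxIdleConns:        100,
		MaxIdleConnsPerHost: 10,
		IdleConnTimeout:     90 * time.Second,
	},
}

// fetch funnels every download through the same shared client.
func fetch(url string) (*http.Response, error) {
	return downloadClient.Get(url)
}
```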
## Does this PR need a docs update or release note?
- [x] ⛔ No
## Type of change
- [x] 🐛 Bugfix
## Issue(s)
* closes #2262
## Test Plan
- [x] 💪 Manual
- [x] ⚡ Unit test
- [x] 💚 E2E
## Description
Add logging to the observe package, with the assumption that
every message that is observed also gets logged.
The merger may want to wait until logging to a file is the standard behavior, otherwise the terminal might get messy/buggy.
## Does this PR need a docs update or release note?
- [x] ⛔ No
## Type of change
- [x] 🧹 Tech Debt/Cleanup
## Issue(s)
* closes #2061
## Test Plan
- [x] 💪 Manual
## Description
Reenable kopia-assisted incrementals now that #2136 contains the bugfix patch
## Does this PR need a docs update or release note?
- [ ] ✅ Yes, it's included
- [ ] 🕐 Yes, but in a later PR
- [x] ⛔ No
## Type of change
- [ ] 🌻 Feature
- [x] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 💻 CI/Deployment
- [ ] 🧹 Tech Debt/Cleanup
## Issue(s)
* closes #2133
## Test Plan
- [x] 💪 Manual
- [ ] ⚡ Unit test
- [ ] 💚 E2E
## Description
Kopia-assisted incremental backups do not appear to have the correct file size set on restore
## Does this PR need a docs update or release note?
- [ ] ✅ Yes, it's included
- [ ] 🕐 Yes, but in a later PR
- [x] ⛔ No
## Type of change
<!--- Please check the type of change your PR introduces: --->
- [ ] 🌻 Feature
- [x] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 💻 CI/Deployment
- [ ] 🧹 Tech Debt/Cleanup
## Issue(s)
<!-- Can reference multiple issues. Use one of the following "magic words" - "closes, fixes" to auto-close the Github issue. -->
* #<issue>
## Test Plan
<!-- How will this be tested prior to merging.-->
- [x] 💪 Manual
- [ ] ⚡ Unit test
- [ ] 💚 E2E
## Description
This addresses the deadlock in the item progress reader by deferring reader creation until
the first read is issued for the item.
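A compact sketch of the deferred-creation idea (not the actual corso reader):
```go
package lazyreader

import (
	"io"
	"sync"
)

type lazyReadCloser struct {
	build func() (io.ReadCloser, error) // builds the real reader, e.g. the progress-wrapped download
	once  sync.Once
	rc    io.ReadCloser
	err   error
}

func newLazyReadCloser(build func() (io.ReadCloser, error)) io.ReadCloser {
	return &lazyReadCloser{build: build}
}

func (l *lazyReadCloser) Read(p []byte) (int, error) {
	// The underlying reader is only constructed when someone actually reads,
	// so nothing blocks at collection-setup time.
	l.once.Do(func() { l.rc, l.err = l.build() })
	if l.err != nil {
		return 0, l.err
	}
	return l.rc.Read(p)
}

// Close is a no-op if nothing was ever read.
func (l *lazyReadCloser) Close() error {
	if l.rc == nil {
		return nil
	}
	return l.rc.Close()
}
```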
## Does this PR need a docs update or release note?
- [ ] ✅ Yes, it's included
- [x] 🕐 Yes, but in a later PR
- [ ] ⛔ No
## Type of change
<!--- Please check the type of change your PR introduces: --->
- [ ] 🌻 Feature
- [x] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 💻 CI/Deployment
- [ ] 🧹 Tech Debt/Cleanup
## Issue(s)
<!-- Can reference multiple issues. Use one of the following "magic words" - "closes, fixes" to auto-close the Github issue. -->
* #1702
## Test Plan
<!-- How will this be tested prior to merging.-->
- [x] 💪 Manual
- [ ] ⚡ Unit test
- [ ] 💚 E2E
## Description
Under some circumstances items can be returned multiple times when iterating through the endpoint. This dedupes items within a single collection.
It does not address the following:
* item being removed from the collection during iteration
* item appearing multiple times in different collections
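A small sketch of the in-collection dedupe described above, assuming items expose an ID (the real iteration code differs):
```go
package dedupe

type item struct {
	ID   string
	Name string
}

// dedupeItems drops repeats of the same item ID within a single collection.
// Items removed mid-iteration or duplicated across collections are out of
// scope, as noted above.
func dedupeItems(items []item) []item {
	seen := make(map[string]struct{}, len(items))
	out := make([]item, 0, len(items))

	for _, it := range items {
		if _, ok := seen[it.ID]; ok {
			continue
		}
		seen[it.ID] = struct{}{}
		out = append(out, it)
	}

	return out
}
```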
## Does this PR need a docs update or release note?
- [ ] ✅ Yes, it's included
- [ ] 🕐 Yes, but in a later PR
- [x] ⛔ No
## Type of change
- [ ] 🌻 Feature
- [x] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 💻 CI/Deployment
- [x] 🧹 Tech Debt/Cleanup
## Issue(s)
* #1954
## Test Plan
- [ ] 💪 Manual
- [x] ⚡ Unit test
- [ ] 💚 E2E
## Description
Adds a new func to the data.Collection iface:
DoNotMergeItems() tells kopia to skip the
retention of items from prior snapshots when
generating the new snapshot for this collection.
A primary use case for this flag is when a delta
token expires, preventing an incremental lookup
and forcing gc to re-discover all items in the
container.
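Illustrative shape of the addition (a sliced-down, simplified view; the real data.Collection interface has more methods and different signatures):
```go
package data

// Collection shows only the slice of the interface relevant to this change.
type Collection interface {
	FullPath() string // simplified here; other methods omitted
	// DoNotMergeItems reports whether kopia should skip merging items from
	// the prior snapshot for this collection, e.g. after a delta token
	// expires and the container must be fully re-enumerated.
	DoNotMergeItems() bool
}

// mergePriorItems sketches how a snapshot builder might consult the flag.
func mergePriorItems(c Collection) bool {
	return !c.DoNotMergeItems()
}
```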
## Does this PR need a docs update or release note?
- [x] ⛔ No
## Type of change
- [x] 🌻 Feature
## Issue(s)
* #1914
## Test Plan
- [x] ⚡ Unit test
## Description
If a delta token expires or is otherwise invalid,
the backup should fall back to the same
behavior as if the collection were new. It must
collect the full delta of information, and create
a collection with New state. If the previous
container moved between the two deltas,
it should be marked for deletion with a
tombstone collection.
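A loose sketch of that fallback flow, with hypothetical helpers and state names; the real control flow lives in the drive collections code:
```go
package deltafallback

import "errors"

var errInvalidDelta = errors.New("delta token expired or invalid")

type collectionState int

const (
	stateDefault   collectionState = iota // delta applied normally
	stateNew                              // full re-enumeration, treat as new
	stateTombstone                        // previous container path marked for deletion
)

type collection struct {
	path  string
	state collectionState
	items []string
}

// enumerate falls back to a full enumeration, producing a New-state
// collection, when the delta token cannot be used; if the container moved
// since the last delta, a tombstone collection marks the old path.
func enumerate(
	deltaQuery, fullEnumeration func() ([]string, error),
	currPath, prevPath string,
) ([]collection, error) {
	items, err := deltaQuery()
	if err == nil {
		return []collection{{path: currPath, state: stateDefault, items: items}}, nil
	}

	if !errors.Is(err, errInvalidDelta) {
		return nil, err
	}

	items, err = fullEnumeration()
	if err != nil {
		return nil, err
	}

	cols := []collection{{path: currPath, state: stateNew, items: items}}
	if prevPath != "" && prevPath != currPath {
		cols = append(cols, collection{path: prevPath, state: stateTombstone})
	}

	return cols, nil
}
```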
## Does this PR need a docs update or release note?
- [x] ⛔ No
## Type of change
- [x] 🌻 Feature
## Issue(s)
* #1780
## Test Plan
- [x] 💪 Manual
- [x] ⚡ Unit test
- [x] 💚 E2E
## Description
Later code in details package will need access to this function so it can update item details. However, importing the onedrive package in the details package leads to an import cycle. Moving it to the path package breaks the cycle while still allowing the code to be accessed.
## Does this PR need a docs update or release note?
- [ ] ✅ Yes, it's included
- [ ] 🕐 Yes, but in a later PR
- [x] ⛔ No
## Type of change
<!--- Please check the type of change your PR introduces: --->
- [ ] 🌻 Feature
- [ ] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 💻 CI/Deployment
- [x] 🐹 Trivial/Minor
## Issue(s)
* #1800
## Test Plan
<!-- How will this be tested prior to merging.-->
- [ ] 💪 Manual
- [x] ⚡ Unit test
- [ ] 💚 E2E
## Description
This brings in another huge improvement in backup speed, dropping from ~15m to ~1.5m for an account with ~5000 files. It also drastically reduces the number of requests we have to make for the same account, from 5505 to just ~55. This means we no longer get throttled and can easily run multiple backup jobs in parallel before hitting the 1024 limit.
~The code works as of now, but I have to double check the numbers as well as fix an issue with us opening too many files and causing the program to crash with 'too many open files' when we bump up the numbers. With the current numbers it works, but I want to double check and optimize them. Plus some code cleanup.~
## Does this PR need a docs update or release note?
- [x] ✅ Yes, it's included
- [ ] 🕐 Yes, but in a later PR
- [ ] ⛔ No
## Type of change
<!--- Please check the type of change your PR introduces: --->
- [ ] 🌻 Feature
- [ ] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 💻 CI/Deployment
- [x] 🐹 Trivial/Minor
## Issue(s)
<!-- Can reference multiple issues. Use one of the following "magic words" - "closes, fixes" to auto-close the Github issue. -->
* #<issue>
## Test Plan
<!-- How will this be tested prior to merging.-->
- [ ] 💪 Manual
- [x] ⚡ Unit test
- [x] 💚 E2E
## Description
Fulfills the data collection interfaces to provide item deleted
flags and collection state enums.
A note on the additional data consumption: I'm making an assumption
that the @removed property will appear in that map. This is awaiting
testing for verification, which I'll get into as a follow-up step.
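Given that assumption, the check would look roughly like this (hypothetical, pending the verification mentioned above):
```go
package deltaitems

// isRemoved reports whether a delta-query item was flagged as deleted,
// assuming deleted items surface via an "@removed" key in the item's
// additional data map.
func isRemoved(additionalData map[string]any) bool {
	_, ok := additionalData["@removed"]
	return ok
}
```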
## Does this PR need a docs update or release note?
- [x] ⛔ No
## Type of change
- [x] 🌻 Feature
## Issue(s)
* #1727
## Test Plan
- [x] ❓ In a follow-up PR
## Description
Configuration of, and attention to, the graphService
failFast property is haphazard and has shared ownership.
This change removes that property from the
service, along with the ErrPolicy func, in favor of passing around a control.Options struct.
## Type of change
- [x] 🐹 Trivial/Minor
## Issue(s)
* #1725
* #302
## Test Plan
- [x] ⚡ Unit test
## Description
`graph.Service -> graph.Servicer`, no other changes.
More compliant with golang naming standards,
and will allow us to eventually migrate the
Service struct out of connector and into graph.
## Type of change
- [x] 🐹 Trivial/Minor
## Issue(s)
* #1725
## Description
We already have a retry handler added to the client as middleware, so don't retry here again if we get any other error.
## Type of change
<!--- Please check the type of change your PR introduces: --->
- [ ] 🌻 Feature
- [x] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 💻 CI/Deployment
- [ ] 🐹 Trivial/Minor
## Issue(s)
<!-- Can reference multiple issues. Use one of the following "magic words" - "closes, fixes" to auto-close the Github issue. -->
* fixes https://github.com/alcionai/corso/issues/1684
## Test Plan
<!-- How will this be tested prior to merging.-->
- [x] 💪 Manual
- [ ] ⚡ Unit test
- [x] 💚 E2E
## Description
Currently, all drive backup and restore actions
populate a details.OneDriveInfo struct. This
change branches that struct between onedrive
and sharepoint info, depending on the
current source.
## Type of change
- [x] 🌻 Feature
## Issue(s)
* #1616
## Test Plan
- [x] ⚡ Unit test
## Description
Do not implement the StreamModTime interface so kopia-assisted incrementals cannot be used for OneDrive.
Feature should be reenabled when the following are resolved:
* #1702
## Type of change
- [ ] 🌻 Feature
- [x] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 💻 CI/Deployment
- [x] 🐹 Trivial/Minor
## Issue(s)
* closes #1723
## Test Plan
- [x] 💪 Manual
- [ ] ⚡ Unit test
- [ ] 💚 E2E
## Description
Add functions to the Collection and Stream interfaces that allow for getting
more information about the difference between the previous backup and
the currently in-progress one. These will allow delta token-based
incremental backups to determine how the state has evolved.
Current code does not use these functions; their return values
are "default" values that should result in full backups even if
KopiaWrapper is updated to start checking the values and GraphConnector
still pulls all items.
These functions are not used during restore and can return "default"
values there.
## Type of change
<!--- Please check the type of change your PR introduces: --->
- [x] 🌻 Feature
- [ ] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 💻 CI/Deployment
- [ ] 🐹 Trivial/Minor
## Issue(s)
* #1700
## Test Plan
<!-- How will this be tested prior to merging.-->
- [ ] 💪 Manual
- [x] ⚡ Unit test
- [ ] 💚 E2E
## Description
Add ModTime to Exchange, OneDrive and SharePoint list stream items. This enables kopia-assisted incrementals for those items. Backup details still contains a complete set of information for all items in the backup, regardless of whether kopia uploaded data for the item or not.
Kopia-assisted incrementals do come with some caveats though. If changes are made to an item in M365 and that change does not cause the modified time reported by M365 to update, then the change will not be backed up. Currently, only marking an email as read/unread is known to hit this edge case.
This patch does not lazily fetch data from Graph API. This means that kopia may upload less data, but the same amount of data will still be pulled from Graph.
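For illustration, the kind of optional interface this lets kopia-side code probe for; names mirror the description rather than the exact corso types:
```go
package streams

import (
	"io"
	"time"
)

// StreamModTime is implemented by streams that can report a last-modified
// time, letting kopia skip re-uploading unchanged content.
type StreamModTime interface {
	ModTime() time.Time
}

type item struct {
	id      string
	modTime time.Time // taken from the M365 item's last-modified timestamp
	body    io.ReadCloser
}

func (i item) UUID() string            { return i.id }
func (i item) ToReader() io.ReadCloser { return i.body }

// ModTime satisfies StreamModTime; note the caveat above that edits which do
// not bump the M365 modified time (e.g. read/unread) won't be detected.
func (i item) ModTime() time.Time { return i.modTime }
```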
## Type of change
- [x] 🌻 Feature
- [ ] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 💻 CI/Deployment
- [ ] 🐹 Trivial/Minor
## Issue(s)
* closes #622
## Test Plan
<!-- How will this be tested prior to merging.-->
- [x] 💪 Manual
- [x] ⚡ Unit test
- [ ] 💚 E2E
## Description
This improves the initial backup speed for OneDrive. OneDrive backup was mostly slow when we had a lot of tiny files. The case where we had mostly large files was pretty much the best case scenario: we were limited purely by how fast we can get the files from MS and how fast kopia can process and upload them. But for lots of small files, we were slowed by the loop that fetched the download urls, which was taking quite a bit of time. We have now parallelized the query for getting the download URL. Under best case scenarios, I was able to speed it up to under 20s from a ~4-5m starting point. That said, the MS graph api still seems to throttle us, and when that happens we fall back to around ~2m in the worst case. I've added 3 retries as some requests were failing when we were continuously making many requests.
This should also take care of the issue of the url expiring mentioned in https://github.com/alcionai/corso/issues/581 as we are only prefetching a few urls ahead of time.
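A stdlib-only sketch of the parallelized URL fetch with bounded concurrency and a few retries (worker counts, helper names, and error handling are illustrative):
```go
package urlfetch

import (
	"context"
	"sync"
	"time"
)

// fetchDownloadURL stands in for the Graph call that returns an item's
// pre-authenticated download URL.
type fetchDownloadURL func(ctx context.Context, itemID string) (string, error)

func withRetries(ctx context.Context, fetch fetchDownloadURL, itemID string, retries int) (string, error) {
	var (
		url string
		err error
	)
	for attempt := 0; attempt <= retries; attempt++ {
		if url, err = fetch(ctx, itemID); err == nil {
			return url, nil
		}
		time.Sleep(time.Duration(attempt+1) * 500 * time.Millisecond)
	}
	return "", err
}

// fetchAll resolves download URLs for many items in parallel, keeping only a
// small window in flight so URLs are used before they expire. Failed items
// are simply skipped here; real code would collect the errors.
func fetchAll(ctx context.Context, fetch fetchDownloadURL, itemIDs []string, workers int) map[string]string {
	var (
		mu   sync.Mutex
		wg   sync.WaitGroup
		sem  = make(chan struct{}, workers)
		urls = make(map[string]string, len(itemIDs))
	)

	for _, id := range itemIDs {
		wg.Add(1)
		sem <- struct{}{}

		go func(id string) {
			defer wg.Done()
			defer func() { <-sem }()

			if url, err := withRetries(ctx, fetch, id, 3); err == nil {
				mu.Lock()
				urls[id] = url
				mu.Unlock()
			}
		}(id)
	}

	wg.Wait()
	return urls
}
```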
## Type of change
<!--- Please check the type of change your PR introduces: --->
- [ ] 🌻 Feature
- [ ] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 💻 CI/Deployment
- [ ] 🐹 Trivial/Minor
## Issue(s)
<!-- Can reference multiple issues. Use one of the following "magic words" - "closes, fixes" to auto-close the Github issue. -->
* https://github.com/alcionai/corso/issues/1595
* fixes https://github.com/alcionai/corso/issues/581
## Test Plan
<!-- How will this be tested prior to merging.-->
- [ ] 💪 Manual
- [x] ⚡ Unit test
- [x] 💚 E2E
## Description
Extends the site drive query to include a selector
for 'system' properties, which causes graph to
return an expanded set of drives including many
which are not normally visible to end users.
## Type of change
- [x] 🌻 Feature
## Issue(s)
* #1506
## Test Plan
- [x] 💚 E2E
## Description
This adds the following functionality to the CLI progress output:
- Add a message e.g. to describe a completed stage
- Add a message describing a stage in progress (using a spinner) and trigger completion
- Add a progress tracker (as a counter) for a specified number of items (when the # of items is known up front)
- Improve `ItemProgress` by aligning the columns a bit better, removing the progress bar, and replacing it
with a byte counter
Finally - the above are used in the `backup` and `backup create onedrive` flows. Follow-up PRs will wire these up for
exchange and the restore flows as well.
## Type of change
<!--- Please check the type of change your PR introduces: --->
- [x] 🌻 Feature
- [ ] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 💻 CI/Deployment
- [ ] 🐹 Trivial/Minor
## Issue(s)
<!-- Can reference multiple issues. Use one of the following "magic words" - "closes, fixes" to auto-close the Github issue. -->
* #1278
## Test Plan
<!-- How will this be tested prior to merging.-->
- [ ] 💪 Manual
- [x] ⚡ Unit test
- [ ] 💚 E2E
## Description
Extends progress bar display with multi-line support.
## Type of change
- [x] 🌻 Feature
## Issue(s)
* #1112
## Test Plan
- [x] 💪 Manual
- [x] ⚡ Unit test
## Description
Adds a new package, Observe, for owning user-
oriented displays like progress bars. This PR adds
an initial progress bar to onedrive backups as a
proof-of-concept. The API is more important than
the specific progress bar package at this time.
Future changes may opt for a different pkg.
Display format currently looks like:
```
59% [=============> ] (6.9/12 kB, 14 MB/s) | Item_Name.txt
```
Known Issues:
* the `progressbar` package does not support multiline output, and [the author is not planning to add support](https://github.com/schollz/progressbar/issues/6). This causes concurrent items to overwrite each other. We will either need to fork the library, or change to a different one.
## Type of change
- [x] 🌻 Feature
## Issue(s)
* #1112
## Test Plan
- [x] 💪 Manual
- [x] ⚡ Unit test
## Description
Feature to show the number of bytes and additional details for backup / restore progress from GraphConnector.
Full description [here](https://www.notion.so/alcion/Corso-Testing-Notes-b9867ac719d8459d8b46dbc7b07b33e0#da3869278b434b0398e7d245554b609b)
## Type of change
<!--- Please check the type of change your PR introduces: --->
- [x] 🌻 Feature
## Issue(s)
<!-- Can reference multiple issues. Use one of the following "magic words" - "closes, fixes" to auto-close the Github issue. -->
* closes #559
## Test Plan
<!-- How will this be tested prior to merging.-->
- [x] 💪 Manual
Feature Output changed to:
```
graph_connector_test.go:137: Action: Backup performed on 5 of 5 objects (227.63KB) within 1 directories. Downloaded from Inbox
graph_connector_test.go:137: Action: Backup performed on 8 of 8 objects (231.28KB) within 2 directories. Downloaded from Inbox, Contacts
graph_connector_test.go:137: Action: Backup performed on 23 of 23 objects (309.36KB) within 3 directories. Downloaded from Inbox, Contacts, Calendar
```
## Description
Adds the following selectors to OneDrive details/restore:
- `file-name`, `folder`, `file-created-after`, `file-created-before`, `file-modified-after`, `file-modified-before`
Also includes a change where we remove the `drive/<driveID>/root:` prefix from parent path entries in details. This
is to improve readability. We will add drive back as a separate item in details if needed later.
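A small sketch of the prefix trimming, with the exact prefix layout treated as illustrative; the real handling goes through the path package:
```go
package parentpath

import "strings"

// trimDriveRootPrefix removes the "drive/<driveID>/root:" style prefix from a
// parent path before it is shown in details, leaving only the folder path.
func trimDriveRootPrefix(p string) string {
	const marker = "/root:"
	if i := strings.Index(p, marker); i >= 0 {
		return strings.TrimPrefix(p[i+len(marker):], "/")
	}
	return p
}
```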
## Type of change
<!--- Please check the type of change your PR introduces: --->
- [x] 🌻 Feature
- [ ] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 💻 CI/Deployment
- [ ] 🐹 Trivial/Minor
## Issue(s)
* #627
## Test Plan
<!-- How will this be tested prior to merging.-->
- [ ] 💪 Manual
- [x] ⚡ Unit test
- [ ] 💚 E2E
## Description
Adds file size and timestamps to enable selectors using this metadata
## Type of change
- [x] 🌻 Feature
- [ ] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 💻 CI/Deployment
- [ ] 🐹 Trivial/Minor
## Issue(s)
* #594
## Test Plan
<!-- How will this be tested prior to merging.-->
- [ ] 💪 Manual
- [x] ⚡ Unit test
- [x] 💚 E2E
## Description
Moves the `path` package to the `pkg` package so other code outside of Corso can use it if they need it
## Type of change
<!--- Please check the type of change your PR introduces: --->
- [ ] 🌻 Feature
- [ ] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 💻 CI/Deployment
- [x] 🐹 Trivial/Minor
## Issue(s)
* closes #908
## Test Plan
<!-- How will this be tested prior to merging.-->
- [ ] 💪 Manual
- [x] ⚡ Unit test
- [x] 💚 E2E
## Description
Finish enabling wsl linter on Corso repo by removing the exception for onedrive package
## Type of change
<!--- Please check the type of change your PR introduces: --->
- [ ] 🌻 Feature
- [ ] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 💻 CI/Deployment
- [x] 🐹 Trivial/Minor
## Issue(s)
* closes #481
## Test Plan
<!-- How will this be tested prior to merging.-->
- [ ] 💪 Manual
- [ ] ⚡ Unit test
- [ ] 💚 E2E
## Description
Implements `data.Collection` restore for OneDrive and wires it up to the restore operation
Leverages previously added `onedrive` helpers that have integration tests.
An op/CLI level integration test will be added in a follow-up PR.
## Type of change
<!--- Please check the type of change your PR introduces: --->
- [x] 🌻 Feature
- [ ] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 💻 CI/Deployment
- [ ] 🐹 Trivial/Minor
## Issue(s)
* #668
## Test Plan
<!-- How will this be tested prior to merging.-->
- [x] 💪 Manual
- [x] ⚡ Unit test
- [ ] 💚 E2E
```
$ ./corso restore onedrive --backup 7de4d68f-67c2-4e81-95a2-81cf0d1bd11d
Restored OneDrive in S3 for user
```
* Make data.Collection.FullPath return path.Path
* Fixup graph connector code for path struct
* Add Elements call to Path interface
Still should not be used in place of the named functions to get specific
path elements (e.g. ResourceOwner())
* Fixup kopia wrapper path handling
Mostly removing shim code and minor fixup for building the directory
tree.
* All the test fixes
* Constants for OneDrive stuff
* Tests and constructor for OneDrive paths
* Populate onedrive path struct in data collection (#835)
* Helper function to make path structs for onedrive
* Use path struct in onedrive data collection
Does not change the external API at all, just the internals of how
FullPath functions and what is stored for the path.
* Wire up making data collections with path struct
Requires addition of tenant as input to Collections().
* Fixup onedrive Collections tests
* Wire up call to onedrive.NewCollections()
Just requires adding the tenant ID to the call.
## Description
Wires up the OneDrive collection logic to `operation.Backup`
Includes an integration test that runs against the test domain
Two bug fixes:
- Skip the "root" item that is returned by the delta query
- Fix incorrect usage of the `filepath.SplitList` function which does
not split a path into components. Instead use `strings.Split`. This
is ok because the paths returned here are not OS specific.
Regardless - this logic will be refactored when we use the `path` pkg.
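To illustrate the second bugfix (why `filepath.SplitList` was wrong here):
```go
package pathsplit

import (
	"path/filepath"
	"strings"
)

func components(p string) []string {
	// Wrong for this purpose: SplitList splits PATH-style lists on the OS
	// list separator (':' or ';'), so on Linux the whole path comes back as
	// one element unless it happens to contain ':'.
	_ = filepath.SplitList(p)

	// Correct here, since the paths returned by Graph always use '/' and are
	// not OS specific.
	return strings.Split(p, "/")
}
```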
## Type of change
Please check the type of change your PR introduces:
- [x] 🌻 Feature
- [x] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 🐹 Trivial/Minor
## Issue(s)
#548
## Test Plan
<!-- How will this be tested prior to merging.-->
- [ ] 💪 Manual
- [x] ⚡ Unit test
- [x] 💚 E2E
Swaps the corso go module from github.com/alcionai/corso
to github.com/alcionai/corso/src to align with the
location of the go.mod and go.sum files inside the repo.
All other changes in the repository update the
package imports to the new module path.
## Description
This builds on the `MergeStatus` proposal and `WaitGroup` discussion proposed in #494
Required for OneDrive where we are operating on multiple collections
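A rough sketch of the aggregation pattern, assuming a simplified status struct and one status report per collection; corso's actual support/status types differ:
```go
package status

import "sync"

type backupStatus struct {
	objects, success, errs int
}

type connector struct {
	wg       sync.WaitGroup
	mu       sync.Mutex
	combined backupStatus
}

// expect registers how many collection status reports we must wait for.
func (c *connector) expect(n int) { c.wg.Add(n) }

// updateStatus is called once per collection when it finishes.
func (c *connector) updateStatus(s backupStatus) {
	defer c.wg.Done()

	c.mu.Lock()
	defer c.mu.Unlock()

	c.combined.objects += s.objects
	c.combined.success += s.success
	c.combined.errs += s.errs
}

// awaitStatus blocks until every expected collection has reported, then
// returns the merged result.
func (c *connector) awaitStatus() backupStatus {
	c.wg.Wait()
	return c.combined
}
```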
## Type of change
Please check the type of change your PR introduces:
- [x] 🌻 Feature
- [ ] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 🐹 Trivial/Minor
## Issue(s)
#494
## Test Plan
<!-- How will this be tested prior to merging.-->
- [ ] 💪 Manual
- [x] ⚡ Unit test
- [ ] 💚 E2E
## Description
Since a backup can encapsulate multiple data
types (ex: mail, contacts, and events), it doesn't
make sense to print one table with all disjoint
values. This change splits up the print output
so that each data type in the details gets its
own table.
## Type of change
Please check the type of change your PR introduces:
- [x] 🌻 Feature
## Issue(s)
#501
## Test Plan
- [ ] 💪 Manual
- [x] ⚡ Unit test
- [ ] 💚 E2E
## Description
This contains the following changes:
- Support functions to enumerate a users drives and the files within it.
- `driveItemReader` method that reads a drive item
- Integration tests for the above
## Type of change
Please check the type of change your PR introduces:
- [x] 🌻 Feature
- [ ] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 🐹 Trivial/Minor
## Issue(s)
- #388
## Test Plan
<!-- How will this be tested prior to merging.-->
- [ ] 💪 Manual
- [x] ⚡ Unit test
- [ ] 💚 E2E
## Description
Introduces a OneDrive data collection.
Follow up PRs will implement the `collection.driveItemReader()` method that uses the Graph API
## Type of change
Please check the type of change your PR introduces:
- [x] 🌻 Feature
- [ ] 🐛 Bugfix
- [ ] 🗺️ Documentation
- [ ] 🤖 Test
- [ ] 🐹 Trivial/Minor
## Issue(s)
- #387
## Test Plan
<!-- How will this be tested prior to merging.-->
- [ ] 💪 Manual
- [x] ⚡ Unit test
- [ ] 💚 E2E