Corso
Corso is the first open-source tool that aims to assist IT admins with the critical task of protecting their Microsoft 365 data. It provides a reliable, secure, and efficient data protection engine. Admins decide where to store the backup data and have the flexibility to perform backups of their desired service through an intuitive interface. As Corso evolves, it can become a great building block for more complex data protection workflows.
Corso is currently in ALPHA and should NOT be used in production.
Corso supports M365 Exchange and OneDrive, with SharePoint and Teams support in active development. Coverage for more services, possibly beyond M365, will expand based on the interest and needs of the community.
Getting Started
See the Corso Documentation for more information.
Corso container images
Corso container images are conveniently hosted on ghcr.io.
For a specific release, use the following command:
docker pull ghcr.io/alcionai/corso:<release tag>
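For example, assuming a hypothetical release tag of v0.1.0 (substitute an actual tag from the project's releases page):
docker pull ghcr.io/alcionai/corso:v0.1.0   # v0.1.0 is a placeholder tag, not a confirmed release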
Building Corso
To learn more about working with the project source code and building Corso, see the Developer section of the Corso Documentation.
Contribution Guidelines
Code of Conduct
It's important that our community is inclusive and respectful of everyone. We ask that all Corso users and contributors take a few minutes to review our Code of Conduct.
License
Corso is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.