As you set up your Data@UNIMI account and begin creating dataverses and datasets in your workspace, remember that the support staff is available for guidance and assistance.
You can also consult the research data management checklist.

Well-structured and well-described dataverses and datasets are key to the FAIR use of research data, as they allow other researchers to access, understand and reuse your research data more easily.
When structuring your workspace, remember that inside your own dataverse you can create one or more dataverses and as many datasets as necessary.
Dataverses are like sub-folders and contain datasets, whereas datasets contain files and are assigned a unique DOI. Focus on the type of data you want to upload and share, as this will influence the structure of your dataverse:
Data related to publications
- If you are uploading research data associated with a scientific publication, your dataset should be named Replication data for: “Title of the publication”. Fill in the “Related publication” metadata field of your dataset with the full citation of your scientific publication and its DOI. If the data you are uploading relate to a scientific publication still under review or submission, you can keep your dataset unpublished (and, if needed, allow reviewers to access it through the “private URL” option): as soon as the related publication appears, follow the instructions above to publish and share your dataset, making it public and open to everyone. In both cases we strongly recommend carefully reading the guideline on How to share data related to a publication.
- If you are uploading data related to a published pre-print, follow the instructions in the previous point. If the pre-print is later submitted to a scientific journal, you can add the citation of the published article, and link it to your pre-print, in the “Related publication” field of the dataset. If during peer review you are asked to make significant changes to your data, consider creating a new dataset with the revised data and linking it to the dataset of the pre-print (which holds the old, unrevised data). For further details, follow the guidelines linked above.
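As a sketch of the naming and “Related publication” convention above, a small helper could assemble both values at once. This is a hypothetical function; the field names mirror Dataverse’s standard citation metadata block and should be verified against your installation:

```python
def replication_dataset_fields(pub_title: str, pub_citation: str, pub_doi: str) -> dict:
    """Build the dataset title and 'Related Publication' values following the
    convention described above. Hypothetical helper: field names are based on
    the standard Dataverse citation block, not an official Data@UNIMI schema."""
    return {
        # Dataset naming convention: Replication data for: "Title of the publication"
        "title": f'Replication data for: "{pub_title}"',
        "relatedPublication": {
            "publicationCitation": pub_citation,  # full citation of the article
            "publicationIDType": "doi",
            "publicationIDNumber": pub_doi,       # DOI of the article
        },
    }
```

For example, `replication_dataset_fields("My Paper", "Doe (2024). My Paper. Journal.", "10.1234/abcd")` yields a title starting with `Replication data for:` plus the related-publication citation and DOI.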
Data with a complex structure
If your research data need a complex structure with many files and different folders and sub-folders, you can upload a zip archive to your dataset, since Dataverse will preserve and display the original tree structure. Alternatively, you can create a new dataverse with many datasets inside your main dataverse, but make sure to maintain naming uniformity (e.g. “Replication data for experiment A in ‘Publication title'” + “Replication data for experiment B in ‘Publication title'”, and so on). In either case, always take care to describe the relationships between your sub-folders and/or sub-dataverses (in a README file and, in the second case, also in the ‘Related dataset’ metadata field).
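The zip-upload option above can be sketched with the Python standard library: package a folder hierarchy into a single archive whose internal paths keep the original tree, so Dataverse can display it. A minimal sketch, assuming your data live under one source directory:

```python
import zipfile
from pathlib import Path

def zip_with_tree(src_dir: str, zip_path: str) -> None:
    """Package a folder hierarchy into one zip archive, storing each file
    under its path relative to the parent of src_dir so the original
    folder/sub-folder tree is preserved inside the archive."""
    src = Path(src_dir)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(src.rglob("*")):
            if path.is_file():
                # arcname keeps e.g. "project/sub/a.csv" rather than a flat name
                zf.write(path, path.relative_to(src.parent))
```

Uploading the resulting single zip file to a dataset then lets Dataverse show the folders and sub-folders as they were on disk.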
Data related to projects
If you are uploading data produced within a research project, whether a collaborative project with many partners or the project of a single researcher, structure your main dataverse in sub-dataverses that are consistently named and organized, with an explanation of the division and relations between them (e.g. one sub-dataverse per partner, per work package of the project, per chapter of a PhD project, and so on). Importantly, if you have written a DMP (and you should!), make sure you follow the data organization and publication plan set out in it. Raw data, working data, deliverables and text documents should be preserved and shared with the members of the project via Drive or other institutional servers. If the project produces one or more publishable and publicly shareable datasets, you can use Data@UNIMI: organize and structure them consistently by following the guideline on How to structure a dataverse for your project.
Raw data
If you are planning to upload raw data, think about it carefully. Repositories like Data@UNIMI are devoted to publishing and sharing data that have been processed and/or represent the final stage of a research process, and are thus ready for interoperability and reuse. Raw data should preferably be preserved and shared with your research team via other platforms, as indicated above: select only relevant data and keep incomplete data elsewhere. If you consider some of your working data relevant enough to be uploaded to Data@UNIMI, name your dataset Work data for “name of the experiment”/”title of the publication”. For working data and processed data, it is key to describe the data exhaustively and to fully document their provenance, along with all the processes, software, code, and so on, used to obtain them, filling in as many metadata fields as possible. Remember that Data@UNIMI is a FAIR repository, and data should be FAIR!
Once you have defined the necessary structure, you can start creating a dataset: filling in metadata fields, taking care of data management, uploading files, setting terms of access and reuse, and, finally, submitting your data for publication.

As you create a dataset, remember that Data@UNIMI allows you to save it as a draft, which can be edited or deleted as needed. Upon creation, a draft dataset is automatically assigned a valid DOI; however, this DOI is not activated until your dataset is published by the Data@UNIMI support staff after the review process.
Upon the creation of your dataset, you are required to fill in some mandatory metadata: we strongly suggest that you complete the required fields, save your dataset, and then edit the metadata section, filling in as many fields as possible. Importantly, in the description field of each dataset, give useful information for understanding the context in which the data were produced: focus on the nature, purpose and scope of the dataset, and make clear how the data were created and, where applicable, processed, providing all the details needed for replication and reuse. The more thoroughly you describe your data, the easier it will be to find, consult and replicate them. Make sure you cover all the metadata reported in the following template, and also consider the “domain-specific metadata” suited to the different disciplinary areas. Note that metadata will always be open and accessible, even if the data in your dataset have restricted access.
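For illustration, the metadata that a Dataverse installation expects when a dataset is created through its native API follows the shape below. This is a sketch based on the standard citation metadata block shipped with Dataverse; field names and controlled-vocabulary values should be verified against the Data@UNIMI API guide:

```python
def minimal_citation_block(title: str, author: str, affiliation: str,
                           description: str, subject: str) -> dict:
    """Build a minimal dataset-creation payload in the shape used by the
    Dataverse native API (datasetVersion -> metadataBlocks -> citation).
    Sketch only: check field names against your installation's docs."""
    fields = [
        {"typeName": "title", "multiple": False,
         "typeClass": "primitive", "value": title},
        {"typeName": "author", "multiple": True, "typeClass": "compound",
         "value": [{
             "authorName": {"typeName": "authorName", "multiple": False,
                            "typeClass": "primitive", "value": author},
             "authorAffiliation": {"typeName": "authorAffiliation", "multiple": False,
                                   "typeClass": "primitive", "value": affiliation},
         }]},
        {"typeName": "dsDescription", "multiple": True, "typeClass": "compound",
         "value": [{
             "dsDescriptionValue": {"typeName": "dsDescriptionValue", "multiple": False,
                                    "typeClass": "primitive", "value": description},
         }]},
        # subject uses a controlled vocabulary, e.g. "Medicine, Health and Life Sciences"
        {"typeName": "subject", "multiple": True,
         "typeClass": "controlledVocabulary", "value": [subject]},
    ]
    return {"datasetVersion": {"metadataBlocks": {"citation": {"fields": fields}}}}
```

The repository’s web forms collect the same fields; filling in optional fields beyond this minimum (keywords, related datasets, grant information, and so on) is what makes the dataset findable and reusable.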
After compiling metadata, you can begin uploading your data files. As you do so, please pay attention to the following:
- file formats, preferring open ones;
- the license, preferring open ones where possible; if you have applied restricted access to your data, remember to explain in the ‘Terms’ tab of your dataset how the files can be obtained;
- documentation: provide a file listing the data files and, importantly, a README file. The README file is mandatory, as it ensures that your data are described and documented in the best possible way, enhancing data quality as well as reuse and replication by other users;
- To help you with this technical process, you can consult the Data@UNIMI user guide, available in both Italian and English.
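The file list and README mentioned above can be drafted programmatically. The template below is purely illustrative, not an official Data@UNIMI format; adapt the sections to your discipline’s documentation conventions:

```python
from pathlib import Path

# Hypothetical README skeleton: sections are illustrative, not prescribed.
README_TEMPLATE = """\
# {title}

## Description
{description}

## File manifest
{manifest}

## License and access
{license}
"""

def write_readme(data_dir: str, title: str, description: str, license_text: str) -> str:
    """Generate a README.md draft including a manifest of the files
    currently present in data_dir, and write it alongside them."""
    files = sorted(p.name for p in Path(data_dir).iterdir() if p.is_file())
    manifest = "\n".join(f"- {name}: <describe the file>" for name in files)
    text = README_TEMPLATE.format(title=title, description=description,
                                  manifest=manifest, license=license_text)
    (Path(data_dir) / "README.md").write_text(text, encoding="utf-8")
    return text
```

The generated draft still needs a human pass: each `<describe the file>` placeholder should be replaced with an actual description of the file’s content, format and role.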
Finally, once you have defined the structure of your dataverse, created your dataset(s), exhaustively filled in metadata sections, uploaded data files and documentation, defined license and terms of access to your dataset(s) and files, and checked the quality of your data, you are ready to submit your dataset(s) for publication.
Just click the ‘Submit for review’ button in the left menu of your dataset: the Data@UNIMI support staff will automatically receive a notification. We will review your submitted dataset and check its quality: Has the metadata been fully and exhaustively compiled? Have the data files been deposited in an open format ingested by the system? Have the data files been named consistently with their content? Do a README file and/or other documentation relevant for interoperability and reuse accompany the data files? Do the chosen license and the terms of access to the dataset match? Does the dataset comply with any applicable legal and ethical requirements? Once we have verified all these aspects, we will either publish your dataset or return it to you, indicating the improvements needed for it to be published and FAIR compliant.
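The review questions above can double as a pre-submission self-check. The sketch below is illustrative only: the set of “open formats” is a hypothetical subset, and the actual review is performed by the support staff:

```python
from pathlib import Path

# Illustrative subset of open formats; the repository's own list is authoritative.
OPEN_FORMATS = {".csv", ".tsv", ".txt", ".json", ".xml", ".pdf", ".png"}

def presubmission_check(data_dir: str) -> list[str]:
    """Return a list of problems found before submitting for review
    (empty list = nothing flagged). Mirrors two of the review questions:
    is there a README, and are file formats open?"""
    problems = []
    files = [p for p in Path(data_dir).iterdir() if p.is_file()]
    if not any(p.name.lower().startswith("readme") for p in files):
        problems.append("missing README file")
    for p in files:
        if p.name.lower().startswith("readme"):
            continue
        if p.suffix.lower() not in OPEN_FORMATS:
            problems.append(f"{p.name}: consider converting to an open format")
    return problems
```

Running such a check before clicking ‘Submit for review’ reduces the chance of the dataset being returned for improvements.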

Importantly, after publication you will always be able to edit the data, metadata and terms of your dataset(s): Data@UNIMI provides a versioning system that efficiently tracks all the changes made, down to single files. Whenever you edit a published dataset, an updated version is automatically created: depending on the kind of changes, the new version can be a minor version (e.g. from version 1.0 to version 1.1, if, for instance, small changes were made to the metadata) or a major version (e.g. from version 1.1 to version 2.0, if, for instance, a file is added or replaced). Data@UNIMI keeps track of the full history of changes and versions, since all published versions remain publicly available in the ‘Versions’ tab of your dataset(s). Importantly, once any changes have been made to a published dataset, you must resubmit it using the ‘Submit for review’ button to publish the most up-to-date version.
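The minor/major versioning rule described above can be expressed as a small function. This is an illustration of the rule, not Data@UNIMI code (the repository assigns version numbers automatically):

```python
def next_version(current: str, change: str) -> str:
    """Illustrate the versioning rule: metadata-only edits produce a
    minor version bump, file changes a major one."""
    major, minor = (int(x) for x in current.split("."))
    if change == "metadata":   # e.g. small changes to metadata: 1.0 -> 1.1
        return f"{major}.{minor + 1}"
    if change == "files":      # e.g. a file added or replaced: 1.1 -> 2.0
        return f"{major + 1}.0"
    raise ValueError("change must be 'metadata' or 'files'")
```

So `next_version("1.0", "metadata")` gives `"1.1"`, and `next_version("1.1", "files")` gives `"2.0"`, matching the examples in the paragraph above.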
And voilà, the work is done: your data has been FAIRly managed and is reusable (according to its license) and available on a certified trustworthy repository. Do not forget to check UNIMI’s Dataverse workflow and steps for creating and submitting a dataset for publication, and the ‘Prepare your data for publication‘ checklist, which can guide you each time you create a dataset.
