A blog on data management and DataLad

[Image: A world map with DataLad minions and connected nodes]

Collaborative infrastructure for a lab: Forgejo

For the past 18 years I have been a GitHub user. It has been an extremely convenient platform for collaborating with many people from all over the world. What makes GitHub, and other platforms like it, particularly attractive is that they are typically way more accessible than any institutionally provided infrastructure (even if not without issues of their own). GitHub has also provided an extremely reliable and stable infrastructure that encouraged and rewarded building on it....

2025-03-04 · 13 min · 2606 words · Michael Hanke
[Image: A screenshot of the DataLad-Registry web UI]

DataLad-Registry: Bringing Benefits of Centrality to DataLad

DataLad provides a platform for managing and uniformly accessing data resources. It also captures basic provenance information about data results within Git repository commits. However, discovering DataLad datasets, or Git repositories that DataLad has operated on, can be challenging. They can be shared anywhere online, including popular Git hosting platforms such as GitHub, generic file hosting platforms such as OSF, and neuroscience platforms such as GIN, or they can even be available only within the internal network of an organization, or on just one particular server....
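
The “uniform access” mentioned above comes down to the same clone-and-get workflow working no matter where a dataset lives. As a rough illustration using DataLad’s Python API (the URLs below are generic placeholders, not datasets indexed by DataLad-Registry):

```python
# Minimal sketch of uniform dataset access with DataLad's Python API.
# The URLs are placeholders; real datasets may live on GitHub, OSF, GIN,
# or an internal server, and the commands stay the same.
import datalad.api as dl

sources = {
    "demo-from-github": "https://github.com/example-org/example-dataset.git",
    "demo-from-gin": "https://gin.g-node.org/example-org/example-dataset",
}

for path, url in sources.items():
    ds = dl.clone(source=url, path=path)          # same call for any hosting platform
    ds.get(".", get_data=False, recursive=True)   # install nested subdatasets, skip file content
```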

2024-12-06 · 9 min · 1706 words · Isaac To, Austin Macdonald, Yaroslav O Halchenko
[Image: A screenshot of https://hub.datalad.org/hcp-openaccess, and the Forgejo, git-annex, and DataLad logos on top]

Hosting really large datasets with Forgejo-aneksajo

One scenario where DataLad shines is managing datasets that are larger than what a single Git repository can deal with. The foundation for that is the combination of git-annex’s ability to separate Git hosting from data hosting in extremely flexible ways with DataLad’s approach of orchestrating collections of nested repositories as a joint “mono repo”. One example of such a large dataset is the WU-Minn HCP1200 Data, a collection of brain imaging data acquired from more than a thousand individual participants by the Human Connectome Project (HCP)....
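
In practice, working with such a nested “mono repo” looks roughly like the sketch below, using DataLad’s Python API. The clone URL and file path are placeholders rather than instructions from the post, and retrieving the actual HCP data may additionally require registration and credentials:

```python
# Minimal sketch: clone only the lightweight superdataset, then ask for a
# single file. DataLad installs the intermediate subdatasets and fetches the
# annexed file content on demand. URL and file path are placeholders, and
# access to the real HCP data may require registration and credentials.
import datalad.api as dl

ds = dl.clone(
    source="https://hub.datalad.org/hcp-openaccess/hcp1200.git",  # placeholder clone URL
    path="hcp-openaccess",
)

# Retrieving one file pulls in only the subdatasets on its path,
# not the entire >1000-subject collection.
ds.get("HCP1200/100307/T1w/T1w_acpc_dc.nii.gz")  # placeholder subject/file path
```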

2024-08-27 · 7 min · 1296 words · Michael Hanke
[Image: Screenshot of a video page of the dataset described in this post as hosted at https://hub.datalad.org/distribits/recordings, and the FFmpeg, HTCondor, git-annex, and DataLad logos on top]

Fairly big video workflow

Two years ago, my colleagues published FAIRly big: A framework for computationally reproducible processing of large-scale data. In this paper, they describe how to partition a large analysis (their example: processing anatomical images of 42 thousand subjects from UK Biobank), using DataLad to provision data and capture provenance, so that individual results can be reproduced on a laptop, even though a cluster is needed to run the entire group analysis. The article is accompanied by a workflow template and a tutorial dataset....
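
At the core of that approach is DataLad’s run/rerun provenance capture. A minimal sketch of that mechanism (not the published workflow template; dataset layout, paths, and the per-subject script are hypothetical):

```python
# Minimal sketch of provenance capture with datalad run: the command, its
# inputs, and its outputs are recorded in a Git commit, so any single result
# can later be recomputed with rerun. Paths and the script are hypothetical.
from datalad.api import Dataset

ds = Dataset("my-analysis")  # hypothetical analysis dataset
ds.run(
    "python code/process_subject.py sub-0001",  # hypothetical per-subject job
    inputs=["inputs/rawdata/sub-0001"],          # provisioned (downloaded) before execution
    outputs=["outputs/sub-0001"],                # saved into the dataset afterwards
    message="Process sub-0001",
)

# Anyone with a clone of the dataset can reproduce exactly this step:
ds.rerun("HEAD")
```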

2024-08-16 · 20 min · 4076 words · Michał Szczepanik

Collecting runtime statistics and outputs with `con-duct` and `datalad-run`

One of the challenges that I’ve experienced when attempting to replicate the execution of a data analysis is, quite simply, that information about the required resources is sparse. For example, when submitting a SLURM job, how does one know the wallclock time to request, much less the memory and CPU resources? To solve this problem, we at the Center for Open Neuroscience have created a new tool, con-duct (aka duct), to easily collect this information....
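
Combining the two could look roughly like the sketch below. The duct invocation, script, and output locations are assumptions for illustration, not the tools’ documented interfaces:

```python
# Hedged sketch: prefix the actual computation with `duct` inside a
# `datalad run` call, so the collected runtime statistics end up versioned
# alongside the result. Script, paths, and duct's output handling are
# assumptions for illustration.
from datalad.api import Dataset

ds = Dataset(".")
ds.run(
    "duct python code/analysis.py",  # duct samples wallclock/CPU/memory of the wrapped command
    outputs=["results/"],            # hypothetical output location
    message="Run analysis under duct to capture resource usage",
)
```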

2024-08-09 · 3 min · 547 words · Austin Macdonald