Matches in Nanopublications for { ?s <http://schema.org/description> ?o ?g. }
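The graph pattern above can be executed against any SPARQL endpoint that indexes nanopublications. A minimal Python sketch using SPARQLWrapper is shown below; the endpoint URL is a placeholder assumption, not a confirmed service address.

```python
# Run the triple pattern { ?s <http://schema.org/description> ?o ?g. }
# against a nanopublication SPARQL endpoint (placeholder URL).
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://example.org/nanopub/sparql"  # placeholder endpoint

query = """
SELECT ?s ?o ?g
WHERE {
  GRAPH ?g { ?s <http://schema.org/description> ?o . }
}
LIMIT 100
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# Print each matching subject with the start of its description.
for binding in results["results"]["bindings"]:
    print(binding["s"]["value"], binding["o"]["value"][:60])
```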
- 06cb618c-d858-4b50-88e3-7b737e4d193f description "The project allowed us to manage and build structured code scripts on the Jupyter Notebook, a simple, user-friendly web application that is flexible to use in the research community. The script was developed to address the specific needs of research across different dataset platforms. These stakeholders have developed their own platforms for the annotation and standardisation of both data and metadata produced within their respective fields. - The INFRAFRONTIER European Mutant Mouse Archive (EMMA) comprises over 7200 mutant mouse lines that are extensively integrated and enriched with other public datasets. - EU-OpenScreen offers compound screening protocols containing rich metadata and will contribute to the development of tools for linking to the chemical entity database. - The IDR (Image Data Resource) is a public repository of reference image datasets from published scientific studies, where the community can submit, search and access high-quality bio-image data. - CIM-XNAT is an XNAT deployment of the Molecular Imaging Center at UniTo that offers a suite of tools for uploading preclinical images. To address the challenge of integrating several EU-RI datasets, with a focus on preclinical and discovery research bioimaging, our aim is to develop cross-searching queries through a web-based interface that combines the resources of the RIs and integrates the information associated with their data. Furthermore, the open-source tool provides users with free, open access to collections of datasets distributed over multiple sources, returned by searches on specific keywords. The script allows cross-searching in different fields of research, such as: Species, Strain, Gene, Cell line, Disease model, Chemical compound. The novel aspects of this tool are mainly: a) it is user-friendly, e.g. the user has the flexibility to search across the datasets easily with a simple API that is intuitive for researchers and biomedical users; b) it makes it possible to search across different platforms and repositories in a single, simple way; c) the project workflow follows the FAIR principles in the treatment of data and datasets. Access to the Jupyter Notebook requires the installation of Anaconda, which allows the web application to be opened. Inside Jupyter, the script was built using Python. The query code is also easy to download and share as an .ipynb file. A visual representation of the detailed results (dataset, metadata, information, query results) of the workflow can be printed immediately after the query run." assertion.
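A minimal sketch of the cross-repository keyword search idea described in this entry. The endpoint URLs and response shapes below are hypothetical placeholders; each real RI (IDR, EMMA, EU-OpenScreen, CIM-XNAT) exposes its own API and would need its own adapter.

```python
# Query several repositories with the same keyword and collect the results.
import requests

REPOSITORIES = {  # hypothetical search endpoints, not real API addresses
    "IDR": "https://example.org/idr/search",
    "EMMA": "https://example.org/emma/search",
}

def cross_search(keyword: str) -> dict:
    """Send the same keyword query to every configured repository."""
    results = {}
    for name, url in REPOSITORIES.items():
        try:
            resp = requests.get(url, params={"q": keyword}, timeout=30)
            resp.raise_for_status()
            results[name] = resp.json()
        except requests.RequestException as exc:
            results[name] = {"error": str(exc)}
    return results

# Example: search all repositories for a gene name.
print(cross_search("Pax6"))
```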
- 7ab8a5dd-8bc6-46c1-8ca9-a0c245a896ad description "# Drug Synergies Screening Workflow ## Table of Contents - [Drug Synergies Screening Workflow](#drug-synergies-screening-workflow) - [Table of Contents](#table-of-contents) - [Description](#description) - [Contents](#contents) - [Building Blocks](#building-blocks) - [Workflows](#workflows) - [Resources](#resources) - [Tests](#tests) - [Instructions](#instructions) - [Local machine](#local-machine) - [Requirements](#requirements) - [Usage steps](#usage-steps) - [MareNostrum 4](#marenostrum-4) - [Requirements in MN4](#requirements-in-mn4) - [Usage steps in MN4](#usage-steps-in-mn4) - [License](#license) - [Contact](#contact) ## Description This pipeline simulates a drug screening on personalised cell line models. It automatically builds Boolean models of interest, then uses cell line data (expression, mutations, copy number variations) to personalise them as MaBoSS models. Finally, the pipeline simulates multiple drug interventions on these MaBoSS models and lists drug synergies of interest. The workflow uses the following building blocks, described in order of execution: 1. Build model from species 2. Personalise patient 3. MaBoSS 4. Print drug results For details on individual workflow steps, see the user documentation for each building block. [`GitHub repository`](https://github.com/PerMedCoE/drug-synergies-workflow) ## Contents ### Building Blocks The ``BuildingBlocks`` folder contains the script to install the Building Blocks used in the Drug Synergies Workflow. ### Workflows The ``Workflow`` folder contains the workflow implementations. It currently contains the implementation using PyCOMPSs. ### Resources The ``Resources`` folder contains a small dataset for testing purposes. ### Tests The ``Tests`` folder contains the scripts that run each Building Block used in the workflow on a small dataset. They can be executed individually *without PyCOMPSs installed* for testing purposes. ## Instructions ### Local machine This section explains the requirements and usage for the Drug Synergies Workflow on a laptop or desktop computer. #### Requirements - [`permedcoe`](https://github.com/PerMedCoE/permedcoe) package - [PyCOMPSs](https://pycompss.readthedocs.io/en/stable/Sections/00_Quickstart.html) - [Singularity](https://sylabs.io/guides/3.0/user-guide/installation.html) #### Usage steps 1. Clone this repository: ```bash git clone https://github.com/PerMedCoE/drug-synergies-workflow.git ``` 2. Install the Building Blocks required for the Drug Synergies Workflow: ```bash drug-synergies-workflow/BuildingBlocks/install_BBs.sh ``` 3. Get the required Building Block images from the project [B2DROP](https://b2drop.bsc.es/index.php/f/444350): - Required images: - PhysiCell-COVID19.singularity - printResults.singularity - MaBoSS_sensitivity.singularity - FromSpeciesToMaBoSSModel.singularity The path where these files are stored **MUST be exported in the `PERMEDCOE_IMAGES`** environment variable. > :warning: **TIP**: These containers can be built manually as follows (be patient, since some of them may take some time): 1. Clone the `BuildingBlocks` repository ```bash git clone https://github.com/PerMedCoE/BuildingBlocks.git ``` 2. 
Build the required Building Block images ```bash cd BuildingBlocks/Resources/images sudo singularity build PhysiCell-COVID19.sif PhysiCell-COVID19.singularity sudo singularity build printResults.sif printResults.singularity sudo singularity build MaBoSS_sensitivity.sif MaBoSS_sensitivity.singularity sudo singularity build FromSpeciesToMaBoSSModel.sif FromSpeciesToMaBoSSModel.singularity cd ../../.. ``` **If using PyCOMPSs on a local PC** (make sure that PyCOMPSs is installed): 4. Go to the `Workflow/PyCOMPSs` folder ```bash cd Workflow/PyCOMPSs ``` 5. Execute `./run.sh` > **TIP**: If you want to run the workflow with a different dataset, please update the `run.sh` script, setting the `dataset` variable to the new dataset folder and its file names. ### MareNostrum 4 This section explains the requirements and usage for the Drug Synergies Workflow on the MareNostrum 4 supercomputer. #### Requirements in MN4 - Access to MN4 All Building Blocks are already installed in MN4, and the Drug Synergies Workflow is available. #### Usage steps in MN4 1. Load the `COMPSs`, `Singularity` and `permedcoe` modules ```bash export COMPSS_PYTHON_VERSION=3 module load COMPSs/3.1 module load singularity/3.5.2 module use /apps/modules/modulefiles/tools/COMPSs/libraries module load permedcoe ``` > **TIP**: Add these module loads to your `${HOME}/.bashrc` file so they are loaded automatically at session start. These commands load COMPSs and the permedcoe package, which provides all necessary dependencies, as well as the path to the singularity container images (`PERMEDCOE_IMAGES` environment variable) and the testing dataset (`DRUG_SYNERGIES_WORKFLOW_DATASET` environment variable). 2. Get a copy of the pilot workflow into your desired folder ```bash mkdir desired_folder cd desired_folder get_drug_synergies_workflow ``` 3. Go to the `Workflow/PyCOMPSs` folder ```bash cd Workflow/PyCOMPSs ``` 4. Execute `./launch.sh` This command launches a job into the job queuing system (SLURM), requesting 2 nodes (one node acting as half master and half worker, and the other as a full worker node) for 20 minutes, and it is prepared to use the singularity images already deployed in MN4 (at the path given by the `PERMEDCOE_IMAGES` environment variable). It uses the dataset located in the `../../Resources/data` folder. > :warning: **TIP**: If you want to run the workflow with a different dataset, please edit the `launch.sh` script and define the appropriate dataset path. After the execution, a `results` folder will be available with the Drug Synergies Workflow results. ## License [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) ## Contact This software has been developed for the [PerMedCoE project](https://permedcoe.eu/), funded by the European Commission (EU H2020 [951773](https://cordis.europa.eu/project/id/951773))." assertion.
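A minimal PyCOMPSs sketch of how the four building blocks named in the README could be chained as tasks. The function names, arguments and return values are illustrative stand-ins, not the actual PerMedCoE building-block signatures; the real workflow lives in `Workflow/PyCOMPSs`.

```python
# Sketch of the building-block chain; run with `runcompss` (or plain
# Python for sequential testing, since @task degrades gracefully).
from pycompss.api.task import task
from pycompss.api.api import compss_wait_on

@task(returns=1)
def build_model_from_species(species):
    return f"model({species})"            # placeholder for the real block

@task(returns=1)
def personalise_patient(model, cell_line_data):
    return f"personalised({model}, {cell_line_data})"

@task(returns=1)
def run_maboss(personalised_model, drug_pair):
    return f"simulation({personalised_model}, {drug_pair})"

if __name__ == "__main__":
    model = build_model_from_species("example_species")
    personalised = personalise_patient(model, "expression.csv")
    # Each drug pair is simulated as an independent (parallelisable) task.
    sims = [run_maboss(personalised, pair)
            for pair in [("drugA", "drugB"), ("drugA", "drugC")]]
    sims = compss_wait_on(sims)
    print(sims)                           # stand-in for "Print drug results"
```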
- d9f38986-7753-486d-aa09-bacf33643dbb description "# Drug Synergies Screening Workflow ## Table of Contents - [Drug Synergies Screening Workflow](#drug-synergies-screening-workflow) - [Table of Contents](#table-of-contents) - [Description](#description) - [Contents](#contents) - [Building Blocks](#building-blocks) - [Workflows](#workflows) - [Resources](#resources) - [Tests](#tests) - [Instructions](#instructions) - [Local machine](#local-machine) - [Requirements](#requirements) - [Usage steps](#usage-steps) - [MareNostrum 4](#marenostrum-4) - [Requirements in MN4](#requirements-in-mn4) - [Usage steps in MN4](#usage-steps-in-mn4) - [License](#license) - [Contact](#contact) ## Description This pipeline simulates a drug screening on personalised cell line models. It automatically builds Boolean models of interest, then uses cell line data (expression, mutations, copy number variations) to personalise them as MaBoSS models. Finally, the pipeline simulates multiple drug interventions on these MaBoSS models and lists drug synergies of interest. The workflow uses the following building blocks, described in order of execution: 1. Build model from species 2. Personalise patient 3. MaBoSS 4. Print drug results For details on individual workflow steps, see the user documentation for each building block. [`GitHub repository`](https://github.com/PerMedCoE/drug-synergies-workflow) ## Contents ### Building Blocks The ``BuildingBlocks`` folder contains the script to install the Building Blocks used in the Drug Synergies Workflow. ### Workflows The ``Workflow`` folder contains the workflow implementations. It currently contains the implementation using PyCOMPSs. ### Resources The ``Resources`` folder contains a small dataset for testing purposes. ### Tests The ``Tests`` folder contains the scripts that run each Building Block used in the workflow on a small dataset. They can be executed individually *without PyCOMPSs installed* for testing purposes. ## Instructions ### Local machine This section explains the requirements and usage for the Drug Synergies Workflow on a laptop or desktop computer. #### Requirements - [`permedcoe`](https://github.com/PerMedCoE/permedcoe) package - [PyCOMPSs](https://pycompss.readthedocs.io/en/stable/Sections/00_Quickstart.html) - [Singularity](https://sylabs.io/guides/3.0/user-guide/installation.html) #### Usage steps 1. Clone this repository: ```bash git clone https://github.com/PerMedCoE/drug-synergies-workflow.git ``` 2. Install the Building Blocks required for the Drug Synergies Workflow: ```bash drug-synergies-workflow/BuildingBlocks/install_BBs.sh ``` 3. Get the required Building Block images from the project [B2DROP](https://b2drop.bsc.es/index.php/f/444350): - Required images: - PhysiCell-COVID19.singularity - printResults.singularity - MaBoSS_sensitivity.singularity - FromSpeciesToMaBoSSModel.singularity The path where these files are stored **MUST be exported in the `PERMEDCOE_IMAGES`** environment variable. > :warning: **TIP**: These containers can be built manually as follows (be patient, since some of them may take some time): 1. Clone the `BuildingBlocks` repository ```bash git clone https://github.com/PerMedCoE/BuildingBlocks.git ``` 2. 
Build the required Building Block images ```bash cd BuildingBlocks/Resources/images sudo singularity build PhysiCell-COVID19.sif PhysiCell-COVID19.singularity sudo singularity build printResults.sif printResults.singularity sudo singularity build MaBoSS_sensitivity.sif MaBoSS_sensitivity.singularity sudo singularity build FromSpeciesToMaBoSSModel.sif FromSpeciesToMaBoSSModel.singularity cd ../../.. ``` **If using PyCOMPSs on a local PC** (make sure that PyCOMPSs is installed): 4. Go to the `Workflow/PyCOMPSs` folder ```bash cd Workflow/PyCOMPSs ``` 5. Execute `./run.sh` > **TIP**: If you want to run the workflow with a different dataset, please update the `run.sh` script, setting the `dataset` variable to the new dataset folder and its file names. ### MareNostrum 4 This section explains the requirements and usage for the Drug Synergies Workflow on the MareNostrum 4 supercomputer. #### Requirements in MN4 - Access to MN4 All Building Blocks are already installed in MN4, and the Drug Synergies Workflow is available. #### Usage steps in MN4 1. Load the `COMPSs`, `Singularity` and `permedcoe` modules ```bash export COMPSS_PYTHON_VERSION=3 module load COMPSs/3.1 module load singularity/3.5.2 module use /apps/modules/modulefiles/tools/COMPSs/libraries module load permedcoe ``` > **TIP**: Add these module loads to your `${HOME}/.bashrc` file so they are loaded automatically at session start. These commands load COMPSs and the permedcoe package, which provides all necessary dependencies, as well as the path to the singularity container images (`PERMEDCOE_IMAGES` environment variable) and the testing dataset (`DRUG_SYNERGIES_WORKFLOW_DATASET` environment variable). 2. Get a copy of the pilot workflow into your desired folder ```bash mkdir desired_folder cd desired_folder get_drug_synergies_workflow ``` 3. Go to the `Workflow/PyCOMPSs` folder ```bash cd Workflow/PyCOMPSs ``` 4. Execute `./launch.sh` This command launches a job into the job queuing system (SLURM), requesting 2 nodes (one node acting as half master and half worker, and the other as a full worker node) for 20 minutes, and it is prepared to use the singularity images already deployed in MN4 (at the path given by the `PERMEDCOE_IMAGES` environment variable). It uses the dataset located in the `../../Resources/data` folder. > :warning: **TIP**: If you want to run the workflow with a different dataset, please edit the `launch.sh` script and define the appropriate dataset path. After the execution, a `results` folder will be available with the Drug Synergies Workflow results. ## License [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) ## Contact This software has been developed for the [PerMedCoE project](https://permedcoe.eu/), funded by the European Commission (EU H2020 [951773](https://cordis.europa.eu/project/id/951773))." assertion.
- 356b66d5-921f-41c8-97e3-ebe65ed97084 description "IDR is based on OMERO, so everything we show in this notebook can easily be adjusted for use against another OMERO server, e.g. your institutional OMERO server instance. The main objective of this notebook is to demonstrate how public resources such as the IDR can be used to train your neural network or validate software tools. The authors of the PLOS Biology paper, "Nessys: A new set of tools for the automated detection of nuclei within intact tissues and dense 3D cultures" published in August 2019: https://doi.org/10.1371/journal.pbio.3000388, considered several image segmentation packages, but they did not use the approach described in this notebook. We will analyse the data using Cellpose and compare the output with the original segmentation produced by the authors. StarDist was not considered by the authors. Our workflow shows how a public repository can be accessed and its data used to validate software tools or new algorithms. We will use an image (id=6001247) referenced in the paper. The image can be viewed online in the Image Data Resource (IDR). We will use a predefined model from Cellpose as a starting point. The steps to access data from IDR could be re-used if you wish to create a new model (outside the scope of this notebook). ## Launch This notebook uses the [environment_cellpose.yml](https://github.com/ome/EMBL-EBI-imaging-course-05-2023/blob/main/Day_4/environment_cellpose.yml) file. See [Setup](https://github.com/ome/EMBL-EBI-imaging-course-05-2023/blob/main/Day_4/setup.md)." assertion.
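A condensed sketch of the notebook's approach: fetch one plane of image 6001247 from IDR via omero-py, then segment it with a pretrained Cellpose model. The connection settings follow the public IDR guides, but treat the host, credentials and exact plane indices as assumptions to verify against the course notebook.

```python
# Fetch a plane from IDR and segment it with a pretrained Cellpose model.
from omero.gateway import BlitzGateway
from cellpose import models

conn = BlitzGateway("public", "public",
                    host="ws://idr.openmicroscopy.org/omero-ws",
                    secure=True)
conn.connect()
try:
    image = conn.getObject("Image", 6001247)
    plane = image.getPrimaryPixels().getPlane(0, 0, 0)  # z=0, c=0, t=0
finally:
    conn.close()

# Pretrained "cyto" model as a starting point, as in the notebook.
model = models.Cellpose(gpu=False, model_type="cyto")
masks, flows, styles, diams = model.eval(plane, diameter=None,
                                         channels=[0, 0])
print("segmented objects:", masks.max())
```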
- d7444133-eaf9-4f60-86e6-c1f37c97126b description "IDR is based on OMERO, so everything we show in this notebook can easily be adjusted for use against another OMERO server, e.g. your institutional OMERO server instance. The main objective of this notebook is to demonstrate how public resources such as the IDR can be used to train your neural network or validate software tools. The authors of the PLOS Biology paper, "Nessys: A new set of tools for the automated detection of nuclei within intact tissues and dense 3D cultures" published in August 2019: https://doi.org/10.1371/journal.pbio.3000388, considered several image segmentation packages, but they did not use the approach described in this notebook. We will analyse the data using Cellpose and compare the output with the original segmentation produced by the authors. StarDist was not considered by the authors. Our workflow shows how a public repository can be accessed and its data used to validate software tools or new algorithms. We will use an image (id=6001247) referenced in the paper. The image can be viewed online in the Image Data Resource (IDR). We will use a predefined model from Cellpose as a starting point. The steps to access data from IDR could be re-used if you wish to create a new model (outside the scope of this notebook). ## Launch This notebook uses the [environment_cellpose.yml](https://github.com/ome/EMBL-EBI-imaging-course-05-2023/blob/main/Day_4/environment_cellpose.yml) file. See [Setup](https://github.com/ome/EMBL-EBI-imaging-course-05-2023/blob/main/Day_4/setup.md)." assertion.
- 91046403-e0b7-41d3-8d60-4b540219ffa7 description "The research object refers to the Deep learning and variational inversion to quantify and attribute climate change (CIRC23) notebook published in the Environmental Data Science book." assertion.
- 03b385c8-ded5-4683-99bf-90ffb6e82f92 description "Contains the input dataset for the paper, used in the Jupyter notebook of Deep learning and variational inversion to quantify and attribute climate change (CIRC23)" assertion.
- 055ff8b9-abff-4e58-9cd8-a1637c6858c0 description "Contains outputs (figures, models and results) generated in the Jupyter notebook of Deep learning and variational inversion to quantify and attribute climate change (CIRC23)" assertion.
- 33b2913e-3372-48ac-8cea-12d962e14259 description "Rendered version of the Jupyter Notebook hosted by the Environmental Data Science Book" assertion.
- 6a4cc43b-8ed5-4439-963f-d3b9dda90747 description "Conda environment for users who want the same libraries installed without concern for package versions" assertion.
- 7364b800-97c3-45a5-aca7-fde930cbe460 description "Jupyter Notebook hosted by the Environmental Data Science Book" assertion.
- 75411860-412a-4e54-b840-d71630afb179 description "Conda lock file of the Jupyter notebook hosted by the Environmental Data Science Book" assertion.
- 8930ebd1-3c8e-4592-b5ef-f69759d6826f description "Related publication of the modelling presented in the Jupyter notebook" assertion.
- b078ae9c-2ae3-4c10-8a86-673b55d2b274 description "The research object refers to the Learning the Underlying Physics of a Simulation Model of the Ocean's Temperature (CIRC23) notebook published in the Environmental Data Science book." assertion.
- 4f3a7e5c-335a-4b71-bed2-a4cce8941d0f description "Contains the input MITgcm dataset for the paper 'Sensitivity analysis of a data-driven model of ocean temperature (v1.1)', used in the Jupyter notebook of Learning the Underlying Physics of a Simulation Model of the Ocean's Temperature (CIRC23)" assertion.
- 6bf74de0-59cb-4d01-a4dc-62cec9896270 description "Conda environment for users who want the same libraries installed without concern for package versions" assertion.
- 6cb8f287-259b-412a-aa53-07485623c426 description "Related publication of the modelling presented in the Jupyter notebook" assertion.
- 9a7b9a09-46d4-45ea-91c9-08ae4ec65397 description "Contains the input models from 'Reproducible Challenge - Team 3 - Sensitivity analysis', used in the Jupyter notebook of Learning the Underlying Physics of a Simulation Model of the Ocean's Temperature (CIRC23)" assertion.
- b1b1e0b8-9887-4487-8545-9e677c9b9bf1 description "Rendered version of the Jupyter Notebook hosted by the Environmental Data Science Book" assertion.
- c7cb324f-4912-4af6-b448-90120daf2eff description "Conda lock file of the Jupyter notebook hosted by the Environmental Data Science Book" assertion.
- e2bb7c97-8729-4d0f-b513-108722a2baac description "Jupyter Notebook hosted by the Environmental Data Science Book" assertion.
- ec40274f-f5a7-4479-8976-e8f63909129b description "Contains outputs (figures) generated in the Jupyter notebook of Learning the Underlying Physics of a Simulation Model of the Ocean's Temperature (CIRC23)" assertion.
- 87c92e66-6319-4e2b-8662-6c0e35039fd0 description "RO to verify checklists" assertion.
- d8fae2d3-7277-454b-b663-f8cd5d82b001 description "This Research Object has as a main artefact a presentation (slides) on The Carpentries approach to training. It gives an overview of The Carpentries initiatives, how they operate, how they collaboratively develop and maintain training materials, and how they train their instructors. The Research Object also contains additional links to other presentations and material of interest for learning more about The Carpentries or other similar initiatives." assertion.
- 04390d9a-7638-4f22-a598-355e2f916e8e description "Presentation given by Toby Hodges on 29 April 2021 to reflect on the First round of Lesson Development Study Groups. Toby explains what the training material on "Lesson Development Study Group" is about and how it helps The Carpentries community to co-develop training material." assertion.
- 1cf83eb7-a289-4347-9212-fb9bbfa5fdbc description "CodeRefinery is a community project where you can find Training and e-Infrastructure for Research Software Development." assertion.
- 230367d0-1b89-4424-8570-e0b94e4d4206 description "Galaxy is an open-source platform for data analysis that enables users to: 1) use tools from various domains (which can be plugged into workflows) through its graphical web interface, and run code in interactive environments (RStudio, Jupyter...) alongside other tools or workflows; 2) manage data by sharing and publishing results, workflows, and visualizations; and 3) ensure reproducibility by capturing the information needed to repeat and understand data analyses. The Galaxy Community is actively involved in improving the ecosystem and sharing scientific discoveries." assertion.
- 2351aa9e-76a3-46d7-aac2-b0a3e6733470 description "A short overview of The Carpentries initiative: how they operate and collaboratively develop, maintain and deliver training on foundational coding and data science skills to researchers worldwide. Informal presentation given for the GO FAIR Foundation Fellow on July 27th 2023." assertion.
- 4c6fb2d0-a83f-485e-93b5-94826731ff46 description "The Carpentries website is the main page where one can find out about The Carpentries initiative. You can find many other links from there, including The Carpentries training material." assertion.
- e8bd18cc-2d6f-4638-a0e0-72f0d4a64e78 description "Website where you can find all the training material for The Galaxy Project with many different topics." assertion.
- fd27e49f-c8c9-4512-b22d-e79b58d3c91e description "Presentation from The Carpentries Community on "The Carpentries Instructor Training" and on how to build skills in a community of practice." assertion.
- a66bbb17-5bfa-4ba1-9199-712bdfbd6b2a description "This Research Object has as a main artefact a presentation (slides) on The Carpentries approach to training. It gives an overview of The Carpentries initiatives, how they operate, how they collaboratively develop and maintain training materials, and how they train their instructors. The Research Object also contains additional links to other presentations and material of interest for learning more about The Carpentries or other similar initiatives." assertion.
- 44d55128-49c8-469c-9fb7-74d6dd6930ff description "Galaxy is an open-source platform for data analysis that enables users to: 1) use tools from various domains (which can be plugged into workflows) through its graphical web interface, and run code in interactive environments (RStudio, Jupyter...) alongside other tools or workflows; 2) manage data by sharing and publishing results, workflows, and visualizations; and 3) ensure reproducibility by capturing the information needed to repeat and understand data analyses. The Galaxy Community is actively involved in improving the ecosystem and sharing scientific discoveries." assertion.
- 4cb1a1ff-4ed8-4236-b089-33def2f3cb36 description "A short overview of The Carpentries initiative: how they operate and collaboratively develop, maintain and deliver training on foundational coding and data science skills to researchers worldwide. Informal presentation given for the GO FAIR Foundation Fellow on July 27th 2023." assertion.
- 4cef586a-1a60-45f9-a1aa-0c2daf36c87c description "CodeRefinery is a community project where you can find Training and e-Infrastructure for Research Software Development." assertion.
- 6f18bdc9-ff67-4794-9fee-0c8d5e0a3bcc description "The Carpentries website is the main page where one can find out about The Carpentries initiative. You can find many other links from there, including The Carpentries training material." assertion.
- 75e62d08-b3fb-4629-8d9b-679300234228 description "Presentation given by Toby Hodges on 29 April 2021 to reflect on the First round of Lesson Development Study Groups. Toby explains what the training material on "Lesson Development Study Group" is about and how it helps The Carpentries community to co-develop training material." assertion.
- b74debc0-bdf9-4612-ab43-be26268dd53b description "Website where you can find all the training material for The Galaxy Project with many different topics." assertion.
- f214bab9-f752-48a7-9164-69d57dfd9214 description "Presentation from The Carpentries Community on "The Carpentries Instructor Training" and on how to build skills in a community of practice." assertion.
- 7435ba71-48f2-4475-999e-cd818cc941ce description "With this pipeline we aim to provide users with the ability to train spatiotemporally robust machine learning models to detect and monitor wetlands, and thus assess their state over time. Wetlands play a vital role in the ecosystem, but also have a critical influence on methane emissions. Methane is around 25 times as powerful as carbon dioxide in trapping heat in the atmosphere, but because it does not stay in the atmosphere as long, it has a more short-term influence on the rate of climate change. See also this news release by NOAA for more details. Wetlands have been one of the major drivers of methane in the atmosphere, acting as a source instead of a sink when they are not stable, for example under water stress or during renaturation." assertion.
- 1cbd78cb-3300-45d0-9fcd-9f9323b0b14e description "Poster and associated video on Climate Science with Galaxy. It provides brief information about the status and roadmap. The goal is to demonstrate that we are shifting from using Galaxy to answer scientific questions to using Galaxy to address societal issues. In particular, we aim to provide a way to monitor the progress of a given climate action undertaken at local, national or international levels." assertion.
- aa831010-4c65-4d28-a423-7cb5f877ac09 description "YouTube video explaining the poster. Poster and associated video on Climate Science with Galaxy. It provides brief information about the status and roadmap. The goal is to demonstrate that we are shifting from using Galaxy to answer scientific questions to using Galaxy to address societal issues. In particular, we aim to provide a way to monitor the progress of a given climate action undertaken at local, national or international levels." assertion.
- 19f2cf29-c1c7-4abc-8443-354e7698bc86 description "Scientific paper about the application of the hyperspectral camera ECOTONE for underwater benthic habitat mapping in the Southern Adriatic Sea. Cited as: Foglini, F.; Grande, V.; Marchese, F.; Bracchi, V.A.; Prampolini, M.; Angeletti, L.; Castellan, G.; Chimienti, G.; Hansen, I.M.; Gudmundsen, M.; et al. Application of Hyperspectral Imaging to Underwater Habitat Mapping, Southern Adriatic Sea. Sensors 2019, 19, 2261. https://doi.org/10.3390/s19102261" assertion.
- 053eea69-7fb7-4a53-a641-06bbb15a0961 description "Figure 1. (A) Location of the two sites, inset shows the position in the Mediterranean Sea; (B) the extension of the Bari Canyon CWC province (from [56]) and (C) the extension of the coralligenous in the Brindisi area (black lines indicate the ROV surveys). Habitat maps produced by the BIOMAP project and further updated within the CoCoNet project. (D) Example of CWC habitat complexity showing colonies of M. oculata and large fan-shaped sponges (from [38]); (E) example of coralligenous characterized by CCA and Peyssonelliales, serpulids and orange encrusting sponges overprinting the calcified red algae." assertion.
- 25cae486-8fcf-4d14-bba4-9dc1bf761318 description "Foglini, F.; Grande, V.; Marchese, F.; Bracchi, V.A.; Prampolini, M.; Angeletti, L.; Castellan, G.; Chimienti, G.; Hansen, I.M.; Gudmundsen, M.; et al. Application of Hyperspectral Imaging to Underwater Habitat Mapping, Southern Adriatic Sea. Sensors 2019, 19, 2261. https://doi.org/10.3390/s19102261" assertion.
- b97348a7-991f-46cf-9834-cf602bacf800 description "The scenario covers the whole Adriatic Sea, including the sources of marine litter coming from the whole of the Adriatic coastline. The water current transport is simulated by the hydrodynamic model under realistic forcing conditions. The dispersal of marine litter is calculated using a particle tracking model developed to take into account the sinking velocity of the particles, as well as the effect of the rocky sea floor on particle bottom transport. Using this numerical approach, we performed simulations that model the dispersal of marine litter, from which we determined hotspots where high concentrations of marine litter accumulate. The model results were used to screen for clean-up locations, where the concentration of marine litter is high with respect to surrounding areas. A crucial aspect of the model is the characterization of the main sources of marine litter in the Adriatic Sea and in the Gulf of Venice. Public litter is the largest source of beached marine litter, followed by non-sourced litter, aquaculture and fishing (mussel nets). For the macrolitter composition of the seabed, the main sources are public activity (domestic and touristic waste), followed by shipping activity and fisheries. The macrolitter composition of the seabed is derived from the field surveys in the Gulf of Venice and is very similar to the beached marine litter composition. The data indicate a high spatial heterogeneity of marine litter, related to high variability in both the temporal distribution and the abundance of each marine litter type." assertion.
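A minimal sketch of the particle-tracking idea described in this entry: advect particles with a current field, add a constant sinking velocity, and settle particles that reach the sea floor. Real simulations use modelled currents and bathymetry; the current field, sinking velocity and flat bathymetry below are all illustrative assumptions.

```python
# Toy particle-tracking model with sinking velocity and a flat sea floor.
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 1000, 3600.0, 240          # particles, time step [s], steps
x = rng.uniform(0, 1e5, n)                # initial positions [m]
y = rng.uniform(0, 1e5, n)
z = np.zeros(n)                           # depth [m], 0 = surface
w_sink = 1e-3                             # sinking velocity [m/s]
sea_floor = 50.0                          # flat bathymetry [m]

for _ in range(steps):
    active = z < sea_floor                # particles still in the water column
    u = 0.1 * np.sin(2 * np.pi * y[active] / 1e5)   # synthetic currents [m/s]
    v = 0.1 * np.cos(2 * np.pi * x[active] / 1e5)
    x[active] += u * dt
    y[active] += v * dt
    z[active] = np.minimum(z[active] + w_sink * dt, sea_floor)

# Accumulation hotspots: 2D histogram of final positions of settled particles.
hist, _, _ = np.histogram2d(x[z >= sea_floor], y[z >= sea_floor], bins=20)
print("max particles in a cell:", int(hist.max()))
```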
- 2ec7a983-b9bd-4f1e-afdf-2e00246c7f62 description "Project deliverable" assertion.
- 67de3cb9-b0cd-4922-b1cf-40cf202b2895 description "The project concerns...." assertion.
- fe70c5a9-9d7a-41f1-af1e-e0ec8acf79ce description "Yes No" assertion.
- dd87ac68-c7b8-4641-a610-5587978d6ff5 description "Global warming and ocean acidification are predicted to impinge on deep sea ecosystems with deleterious consequences for their biodiversity and ecosystem services. More specifically, human-induced global climate changes may pose a major threat to cold water corals, which are known to engineer some of the most complex, diverse and charismatic habitats at bathyal depths. Furthermore, these corals may suffer in the future from an unprecedented destruction of their habitat from additional pressures (e.g. bottom fishing, deep-sea mining), which may further modify their structure and function. However, the adaptive potential of cold water corals in response to increasing warming remains largely undocumented. In the Mediterranean Sea, the analysis of radiometrically-dated fossil specimens shows a general decline in the abundance of cold water corals that were exposed to temperatures higher than 14°C in the past. Considering that modern intermediate and deep Mediterranean waters are close to or above 14°C, this suggests that most of the Mediterranean cold water coral species thrive today very close to their physiological threshold, with deleterious consequences on deep-sea ecosystems and biodiversity. This multidisciplinary case study will target those Mediterranean cold water coral habitats that develop under thermal conditions very close to 14°C. Maps of the current cold water coral distribution in the Mediterranean Sea will be correlated with chemical and physical parameters (e.g. temperature, salinity, nutrients, dissolved oxygen concentration) from field data and modelling. The fossil records will also be analysed through U/Th dating and isotopic geochemistry, and the abundance of fossil corals will be correlated with proxy-based environmental reconstructions. Future scenarios will be provided based on regional models (e.g. Med-CORDEX) and will focus on the increase of temperature and the consequent cold water coral habitat loss. The role of other parameters such as nutrient concentration and decreasing pH will also be investigated. This case study involves experts in paleoclimatology, paleoecology, physical and chemical oceanography, biology, ecology and modelling. Key hypothesis to be tested: seawater temperature is the main environmental factor controlling the past, present and future distribution of cold water corals in the Mediterranean Sea." assertion.
- 1a9af765-caf3-4053-bbe6-e7f4b7240f1a description "This case study targets the Mediterranean cold water coral habitats that develop under thermal conditions very close to 14°C. Maps of the current cold water coral distribution in the Mediterranean Sea will be correlated with physical parameters (i.e. temperature) from field data and the regional model Med-CORDEX. The study will assess the effect of increasing temperature and its consequences for cold water coral habitat loss." assertion.
- 7a1fc890-bfc1-41dc-a241-ce06901bd054 description "Literature on the distribution of CWC in the Mediterranean Sea" assertion.
- f7609946-b35f-42ea-914b-d74cbf6e8579 description "Data on temperature, salinity, dissolved oxygen, pH, and nutrient concentrations in seawater used to explore how environmental variables influence the distribution of CWC in the Mediterranean Sea" assertion.
- a443bee6-0bb2-43eb-9428-906af2273a19 description "The Secure Generative Data Exchange (SGDE) tool assists AI developers by facilitating the training, collection, and distribution of data generators, mitigating challenges related to data access, transfer, privacy, and cost. SGDE allows training of these generators on edge devices where data is collected, enables sharing of the generators, and helps create unlimited synthetic samples for new machine learning tasks. The SGDE protocol initiates on the edge device, subsequently uploading the trained generator to a central server; clients can then download these generators and create synthetic samples, reducing distribution biases thanks to the equal class representation of the synthetic data." assertion.
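A schematic sketch of the SGDE protocol flow described above: train a generator on the edge, exchange it through a server, and sample equally per class on the client. The "generator" here is a toy per-class Gaussian model, and the upload/download are stand-ins for the real SGDE server exchange.

```python
# Toy illustration of the edge-train / share / sample-balanced flow.
import numpy as np

def train_generator(X, y):
    """Edge side: fit one Gaussian per class (toy generator)."""
    return {c: (X[y == c].mean(axis=0), X[y == c].std(axis=0))
            for c in np.unique(y)}

def sample(generator, n_per_class):
    """Client side: equal class representation by construction."""
    rng = np.random.default_rng(0)
    Xs, ys = [], []
    for c, (mu, sigma) in generator.items():
        Xs.append(rng.normal(mu, sigma, size=(n_per_class, mu.size)))
        ys.append(np.full(n_per_class, c))
    return np.vstack(Xs), np.concatenate(ys)

# Edge device: train on local (imbalanced) data, then "upload" the generator.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (90, 4)), rng.normal(3, 1, (10, 4))])
y = np.array([0] * 90 + [1] * 10)
uploaded = train_generator(X, y)          # stand-in for the server upload

# Another client: "download" the generator and draw balanced synthetic data.
X_syn, y_syn = sample(uploaded, n_per_class=500)
print(X_syn.shape, np.bincount(y_syn))
```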
- 1835 description "" assertion.
- 1c9bfc94-dbb9-475e-af50-601bff9f6c0c description "This RO provides the ADAM collection of the Sentinel-1 dataset over Etna volcano based on the LiCSAR catalogue." assertion.
- cf84e531-56d3-43ee-8362-c340d0addf30 description "This case study targets the Mediterranean cold water coral (CWC) habitats that develop under thermal conditions very close to 14°C. Maps of the current cold water coral distribution in the Mediterranean Sea will be correlated with physical parameters (i.e. temperature) from field data and the regional model Med-CORDEX. Considering that intermediate and deep water temperatures in the Mediterranean Sea are around 14°C, this suggests that most of the cold water coral species in the region thrive very close to their physiological threshold. Future scenarios are provided based on regional models (e.g. Med-CORDEX). The objective is to investigate future scenarios and predictions in case of an increase in temperature and its consequences for CWC habitat loss." assertion.
- 049cc3f2-2b96-4ac7-8776-c095c522f492 description "Seawater temperature from the MEDCORDEX output model CNRM-CM5; variable: monthly seawater temperature (thetao), 1995-2005" assertion.
- 137b72a2-daac-4f87-8df9-df6204ff1ec4 description "environment" assertion.
- 19523e42-6e90-4500-a738-b75d0c9804f5 description "Map of the cold water coral spatial distribution with 14°C isotherm depth predictions from the RCP 8.5 (2070-2100) scenario" assertion.
- 34af0c5f-8e14-4d77-83f6-2be63542cb14 description "Seawater temperature from the MEDCORDEX output model CNRM-CM5; variable: monthly seawater temperature (thetao), RCP 8.5 (2070-2100) scenario" assertion.
- 359e6d79-d0fa-42c3-812a-588c42720d80 description "The coastline of the Mediterranean Sea" assertion.
- 3b0ec1d0-a90f-40c8-a4f1-98df08b92c77 description "CWC spatial distribution colour-coded by depth against the variability of the 14°C isotherm depth; upper panel: historical data (1995-2005), lower panel: RCP 8.5 scenario (2070-2100)" assertion.
- 6fd4cc21-2dc3-41d9-96e4-0ddac96fdda1 description "The Med-CORDEX initiative has been proposed by the Mediterranean climate research community as a follow-up of previous and existing initiatives. Med-CORDEX takes advantage of new very high-resolution Regional Climate Models (RCMs, up to 10 km) and of new fully coupled Regional Climate System Models (RCSMs), coupling the various components of the regional climate." assertion.
- 9e3b4fd3-ab98-4611-8b6c-277fdda86c50 description "Jupyter notebook to plot Cold Water Corals and 14°C isotherm depth scenarios" assertion.
- b05dac24-5771-42b9-8e3d-acf5d0e7b87c description "Cold water coral locations: longitude, latitude and depth" assertion.
- c1410daa-1c5b-4a64-928a-80e5570bd638 description "This Python notebook automatically downloads the files you need from the model at https://www.medcordex.eu/ and creates directories to save them." assertion.
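A minimal sketch of what such a download notebook does: create local directories per scenario and fetch the model files. The base URL and file names below are placeholders; the real MEDCORDEX file paths require registration on medcordex.eu.

```python
# Download placeholder model files into per-scenario directories.
import os
import requests

FILES = {  # placeholder names; not the actual MEDCORDEX file listing
    "historical": ["thetao_1995-2005.nc"],
    "rcp85": ["thetao_2070-2100.nc"],
}
BASE_URL = "https://www.medcordex.eu/data"  # placeholder base URL

for scenario, names in FILES.items():
    os.makedirs(scenario, exist_ok=True)
    for name in names:
        target = os.path.join(scenario, name)
        resp = requests.get(f"{BASE_URL}/{scenario}/{name}", timeout=60)
        resp.raise_for_status()
        with open(target, "wb") as fh:
            fh.write(resp.content)
        print("saved", target)
```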
- f15ca50f-edbd-4f30-aaf0-e6256532849e description "code#1: reads NetCDF seawater temperature data from the MEDCORDEX output model CNRM-CM5 (variable: monthly seawater temperature, thetao); code#2: reads NetCDF seawater temperature data from the MEDCORDEX output model CNRM-CM5 (variable: monthly seawater temperature, thetao); code#3: reads in the 14°C isotherm depth from the historical (1995-2005) and RCP 8.5 (2070-2100) scenarios and plots the data" assertion.
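A hedged sketch of the core computation behind code#1-#3: read monthly thetao from a CNRM-CM5 NetCDF file, find the depth of the 14°C isotherm, and plot it. The file name is a placeholder, and the variable and coordinate names (thetao, lev) follow CMOR conventions but should be checked against the actual model files.

```python
# Compute and plot the depth of the 14°C isotherm from a thetao file.
import xarray as xr
import matplotlib.pyplot as plt

ds = xr.open_dataset("thetao_1995-2005.nc")       # placeholder file name
thetao = ds["thetao"].mean("time")                # time-mean temperature

# Depth of the 14°C isotherm: shallowest level where T drops below 14°C.
below = thetao < 14.0
isotherm_depth = ds["lev"].where(below).min("lev")

isotherm_depth.plot()
plt.title("Depth of the 14°C isotherm (historical mean)")
plt.savefig("isotherm14_depth.png", dpi=150)
```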
- 9a0df9ca-1970-4edf-9815-4a2f15702046 description "The eruption of Etna on 28 February 2021 was seen by the MODIS sensor during the passage of the NASA Terra satellite at 09:40 UTC. In this research object, a test is performed to extract the plume pixels using AOT retrieval at 0.55 micron over both ocean and land." assertion.
- 6a31b3fb-6a4d-4731-9cca-4f5d33574dd2 description "This is a test of DMP creation. This is the data management plan (DMP) of the RELIANCE H2020 project. It outlines data that will be collected or generated during the project and discusses how it will be handled during and after the project lifetime. As part of the Open Research Data Pilot (ORDP), RELIANCE delivers a first version of the DMP at an early stage of the project. This first version focuses on already identified research data to be collected, used and/or produced by the project, particularly by our scientific communities as part of the implementation of their use cases, but also on the user data collected by the RELIANCE services. This DMP will be updated regularly to reflect significant changes, e.g., when new data is used/produced." assertion.
- 1b8f1898-7ae4-4847-aeb3-3519a062d0c8 description "The research object refers to the Sea ice forecasting using IceNet notebook published in the Environmental Data Science book." assertion.
- 0b538afb-ef06-4b1f-9f19-a7f4c0b927ca description "Conda lock file for the linux-64 OS of the Jupyter notebook hosted by the Environmental Data Science Book" assertion.
- 770d97b4-b101-4af4-95d9-c94b72e16c1a description "Jupyter Notebook hosted by the Environmental Data Science Book" assertion.
- 77485763-376b-4b2e-be69-4b0a7094d718 description "Conda environment for users who want the same libraries installed without concern for package versions" assertion.
- 780441ef-e2fc-4b9d-968a-e6878460a7e1 description "Contains the input forecasts, neural networks, and results from the paper 'Seasonal Arctic sea ice forecasting with probabilistic deep learning', used in the Jupyter notebook of Sea ice forecasting using IceNet" assertion.
- 7a1dc762-dcb5-4176-89c9-839be2492364 description "Contains outputs (table and figures) generated in the Jupyter notebook of Sea ice forecasting using IceNet" assertion.
- 8a99ffd1-faa4-4e15-9dd5-908551341d99 description "Related publication of the modelling presented in the Jupyter notebook" assertion.
- b574c292-7294-4e31-9247-1092bdbe2de4 description "Conda lock file for the osx-64 OS of the Jupyter notebook hosted by the Environmental Data Science Book" assertion.
- c9e0d4d8-8ea3-40bb-a655-29d06c09f9b2 description "Contains the input dataset for IceNet's demo notebook, used in the Jupyter notebook of Sea ice forecasting using IceNet" assertion.
- e458161e-0fdc-4b9d-be31-8fa388ba409c description "Rendered version of the Jupyter Notebook hosted by the Environmental Data Science Book" assertion.
- fb3a8b1f-7132-4c0e-80c8-33ff294808da description "This RO provides the ADAM collection of the Sentinel-1 dataset over Iceland based on the LiCSAR catalogue." assertion.
- 3daef89c-f0cf-4d92-9dc1-045ae1442625 description "LICSAR interferograms dataset based on Sentinel-1 SAR data since 2016 over Iceland" assertion.
- 2b9f4a1a-d72d-4e02-bd58-8ee96e7a224d description "The research object refers to the Sea ice forecasting using IceNet notebook published in the Environmental Data Science book." assertion.
- 014e2516-2042-454e-8fa9-36c7e0f48fa5 description "Contains the input dataset for IceNet's demo notebook, used in the Jupyter notebook of Sea ice forecasting using IceNet" assertion.
- 0eea17b9-0aca-4d38-b4c6-ec89972f1b1c description "Contains the input forecasts, neural networks, and results from the paper 'Seasonal Arctic sea ice forecasting with probabilistic deep learning', used in the Jupyter notebook of Sea ice forecasting using IceNet" assertion.
- 49744ebb-2ee9-43ae-b19d-d5785c05050f description "Conda environment for users who want the same libraries installed without concern for package versions" assertion.
- 4d9a3542-997a-4c3f-b4ef-7f41c79cf950 description "Jupyter Notebook hosted by the Environmental Data Science Book" assertion.
- 6b333b2f-6744-41e0-9e89-1ccdda05828b description "Conda lock file for the osx-64 OS of the Jupyter notebook hosted by the Environmental Data Science Book" assertion.
- a112ec16-7272-43d3-ae40-f6731b16247b description "Conda lock file for the linux-64 OS of the Jupyter notebook hosted by the Environmental Data Science Book" assertion.
- c4857fe3-ec0b-4bed-b2a8-4cca7f0167db description "Rendered version of the Jupyter Notebook hosted by the Environmental Data Science Book" assertion.
- ca55eaa8-5f4d-4737-a83e-bed520aaee00 description "Contains outputs (table and figures) generated in the Jupyter notebook of Sea ice forecasting using IceNet" assertion.
- e98edbc3-ddf4-432a-9632-662587d3c565 description "Related publication of the modelling presented in the Jupyter notebook" assertion.
- b7a61a55-7fcb-40ae-a2ae-595d346c02ab description "The field of Open Science has led scientists to agree on the idea that data, workflows and services should be findable, accessible, interoperable, and thus optimally reusable (FAIR). These principles also apply to Earth Science communities, which deal with rapidly evolving natural phenomena. However, there is still a weakness regarding research sharing and re-use across the scientific community, due to a lack of technological solutions and their long-term implementation. The H2020 Reliance project delivers a suite of innovative and interconnected services that extend European Open Science Cloud (EOSC) capabilities to support the management of the research lifecycle within Earth Science communities, Copernicus users, and beyond. The project has delivered three complementary technologies: Research Objects, Data Cubes and AI-based Text Mining. ROHub (https://reliance.rohub.org/) is the Research Object management platform that implements these three technologies and enables researchers to collaboratively manage, share and preserve their research work. ROHub implements the full Research Object model and paradigm: resources associated with a particular research work are aggregated into a single FAIR digital object, and metadata relevant for understanding and interpreting the content is represented as semantic metadata that are both user and machine readable." assertion.
- 03e3f3ce-db5f-4a59-a882-ad74fb8819cd description "sketch" assertion.
- afaa8f23-1d10-4fcf-8694-38e8f63873ad description "Abstract of the poster presented at EGU General assembly, Vienna, 25 April 2023" assertion.
- dafbc003-62e7-4ca4-949e-abb5b300cc2e description "poster" assertion.
- ab3f22c0-7006-4f8b-9a0c-f604654241d8 description "Demo app for state tagging approach for QA/QC of environmental data" assertion.
- 0894811f-c181-43d9-9f4d-934ee8aeec82 description "This R application is an implementation of the state tagging approach for improved quality assurance of environmental data. The application returns state-dependent prediction intervals for input data. The states are determined by clustering auxiliary inputs (such as meteorological data) measured on the same day. The method provides contextual information to assess the quality of observational data and is applicable to any point-based, daily time-series observational data. To use this application, the user needs to input two separate CSV files: one for state variables and the other for observations. This work was supported by the Natural Environment Research Council award number NE/R016429/1 as part of the UK-SCAPE programme delivering National Capability." assertion.
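The application itself is written in R; the scikit-learn sketch below only illustrates the idea under stated assumptions: cluster the daily auxiliary (meteorological) inputs into states, then report a per-state prediction interval for the observed variable. The column names, number of states and interval quantiles are assumptions, not the app's actual configuration.

```python
# Toy state tagging: cluster auxiliary inputs, then per-state intervals.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

# Synthetic stand-ins for the two CSV inputs the app expects.
rng = np.random.default_rng(0)
states_df = pd.DataFrame({"temp": rng.normal(10, 5, 365),
                          "rain": rng.gamma(2, 2, 365)})
obs = pd.Series(rng.normal(50, 10, 365), name="observation")

# 1. Tag each day with a state from clustering the auxiliary inputs.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(states_df)

# 2. State-dependent prediction interval: 5th-95th percentile per state.
intervals = obs.groupby(labels).quantile([0.05, 0.95]).unstack()
print(intervals)

# A new observation would be flagged if it falls outside the interval
# of the state assigned to its day.
```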
- 0b51eade-52ce-4ab3-9001-7967562198c9 description "This RO is created as part of the mini workshop on RoHub during CW23." assertion.