MD.2 - Short title of the AI. optional
MD.3 - Short description of the AI.
MD.4 - Stable ID or URL to a paper where the AI is described. optional
MD.5 - Keywords relevant for the AI.
MD.6.x.1 - Name of the author.
MD.6.x.2 - Name of the institution.
MD.6.x.3 - Email address of the author.
MD.6.x.4 - ORCID iD of the author. optional
MD.7.x - Funding details relevant for the AI.
MD.8 - Specify whether the AI should appear in the search.
MD.9.x - Other report or checklist
MD.9.x.1 - Select the type of document.
MD.9.x.2 - Upload a report or checklist.
P.1 - What is your AI designed to learn or predict?
P.2.1 - Does your AI predict a surrogate marker?
P.2.2 - More detailed information about the surrogate marker.
P.3.1 - To which category does your AI problem belong?
P.3.2 - More detailed information about the problem category.
D.x.1 - What is the type of the data?
D.x.2.1 - Is the data real or simulated?
D.x.2.2 - How was the data simulated?
D.x.2.3 - Did you have to obtain ethics committee approval before collecting the data?
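A minimal sketch of what a reportable simulation for D.x.2.2 could look like: the seed and all generating parameters are fixed in code, so the dataset can be regenerated exactly. The genotype/phenotype setup and every parameter value here are illustrative assumptions, not part of the registry.

```python
# Hypothetical sketch for D.x.2.2: a fully scripted simulation with a fixed
# seed and documented parameters is what makes simulated data reportable.
import numpy as np

rng = np.random.default_rng(42)                 # seed to report
n_samples, n_snps = 500, 100
maf = rng.uniform(0.05, 0.5, size=n_snps)       # minor allele frequencies
genotypes = rng.binomial(2, maf, size=(n_samples, n_snps))  # 0/1/2 coding
effect = np.zeros(n_snps)
effect[:5] = 0.8                                # 5 assumed causal loci
phenotype = genotypes @ effect + rng.normal(size=n_samples)
```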
D.x.3.1 - Is the data publicly available?
D.x.3.2 - Where can the data be found?
D.x.3.3 - How can the data be requested?
D.x.4 - Is this data used for training?
D.x.5.1 - Did you check if the data is subject to biases?
D.x.5.2 - How did you check for biases and what was your conclusion?
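One simple way to answer D.x.5 is a subgroup performance probe: compare a metric across known strata of the data. The sketch below is a hypothetical illustration; the names `y_true`, `y_pred`, and `groups` are placeholders.

```python
# Hypothetical sketch for D.x.5: per-subgroup accuracy as a basic bias probe.
import numpy as np

def subgroup_accuracy(y_true, y_pred, groups):
    """Return accuracy per subgroup so large gaps can flag potential bias."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
        for g in np.unique(groups)
    }

# A large accuracy gap between groups would warrant elaboration in D.x.5.2.
print(subgroup_accuracy([0, 1, 1, 0], [0, 1, 0, 0], ["a", "a", "b", "b"]))
```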
D.x.6 - Samples and features
D.x.6.1 - How many samples does the dataset have?
D.x.6.2 - How many features does the dataset have?
D.x.7.1 - How did you pre-process your data? optional
D.x.7.2 - Elaborate how you have performed pre-processing of this data.
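A pre-processing step becomes reportable under D.x.7 when its parameters are learned on the training data only and stored for reuse. A minimal standardization sketch, assuming plain NumPy and placeholder data:

```python
# Sketch for D.x.7: a documented, reproducible standardization step whose
# parameters (mu, sigma) are fit on training data and can be re-applied.
import numpy as np

def fit_standardizer(X_train):
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0)
    sigma[sigma == 0] = 1.0      # guard against constant features
    return mu, sigma

def apply_standardizer(X, mu, sigma):
    return (X - mu) / sigma

rng = np.random.default_rng(0)
X_train, X_test = rng.normal(size=(100, 5)), rng.normal(size=(20, 5))
mu, sigma = fit_standardizer(X_train)
X_test_std = apply_standardizer(X_test, mu, sigma)
```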
M.1 - Which AI or mathematical methods did you use and how did you select them?
M.2 - How did you select your method’s hyper-parameters?
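For M.2, one common, reportable selection procedure is a cross-validated grid search. The model and grid below are illustrative assumptions, not a prescribed choice:

```python
# Hedged sketch for M.2: hyper-parameter selection via grid search with
# cross-validation; the grid itself is what you would report.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, random_state=0)
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},   # grid to report under M.2
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```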
M.3.1 - Which test metrics do you report? optional
M.3.2 - Which additional metrics do you report?
M.4.1 - Did you take measures to prevent overfitting?
M.4.2 - Select how you prevented overfitting.
M.4.3 - Elaborate on how you prevented overfitting.
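An overfitting check for M.4 can be made explicit by reporting the gap between training and cross-validated test scores. A sketch under assumed placeholder data and model:

```python
# Sketch for M.4: report the train/test score gap; a large gap suggests
# overfitting and should be elaborated on in M.4.3.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)
scores = cross_validate(DecisionTreeClassifier(random_state=0), X, y,
                        cv=5, return_train_score=True)
gap = scores["train_score"].mean() - scores["test_score"].mean()
print(f"train-test gap: {gap:.3f}")
```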
M.5.1 - Did you check if there are specific trigger situations (e.g. confounding factors) that induce your AI to fail in its task?
M.5.2 - Elaborate on whether there are trigger situations.
M.6.1 - Did you check whether randomized steps in your AI affect the stability of the results?
M.6.2 - Elaborate on how randomized steps affect the stability.
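M.6 asks about stability under randomness; one straightforward answer is to repeat the experiment over several seeds and report mean and spread. A sketch with assumed placeholder data and model:

```python
# Sketch for M.6: quantify how randomized steps (initialization, splits)
# affect results by repeating the run over several seeds.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

scores = []
for seed in range(5):
    X, y = make_classification(n_samples=300, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=seed)
    model = RandomForestClassifier(random_state=seed).fit(X_tr, y_tr)
    scores.append(model.score(X_te, y_te))
print(f"accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```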
M.7.1 - Did you compare against a simple baseline model?
M.7.2 - Elaborate on how you compared against a baseline model.
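A baseline comparison for M.7 can be as simple as a majority-class predictor evaluated on the same split. The sketch below uses scikit-learn's DummyClassifier as one illustrative option:

```python
# Sketch for M.7: a trivial baseline makes reported performance interpretable.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
baseline = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("baseline:", baseline.score(X_te, y_te),
      "model:", model.score(X_te, y_te))
```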
M.8 - State-of-the-art approaches
M.8.1 - Did you compare against state-of-the-art approaches?
M.8.2 - Elaborate on how you compared against state-of-the-art approaches.
R.1.1 - Do you provide all means (including dependencies) to easily re-run your AI?
R.1.2 - Which means for re-running your AI do you provide?
R.1.3 - Elaborate on additional means you provide.
R.2.1.1 - Is the source code of your AI publicly available?
R.2.1.2 - Specify where the source code can be found.
R.2.1.3 - Have you used a source code management tool (e.g., Git)?
R.2.1.4 - Under which license did you publish the code?
R.2.2.1 - Is the source code of your data simulation publicly available?
R.2.2.2 - Specify where the source code can be found.
R.2.3 - Pre-processing pipeline
R.2.3.1 - Is the source code of your pre-processing pipeline publicly available?
R.2.3.2 - Specify where the source code can be found.
R.3.1 - Do you provide a pre-trained model?
R.3.2 - Specify where the pre-trained model can be found.
R.4 - Execution environment
R.4.1 - Operating systems
R.4.1.1 - On which platforms can your AI be run as-is? optional
R.4.1.2 - More detailed information about the supported platforms.
R.4.2 - Computing resources
R.4.2.1 - Does your AI need computing resources that exceed those of a regular personal computer?
R.4.2.2 - Specify the computing resources required to run your AI.
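Answers to R.4 (and the re-run means in R.1) are easiest to verify when the execution environment is recorded programmatically. A standard-library-only sketch; the package names listed are project-specific assumptions:

```python
# Sketch for R.4: record platform, Python version, and pinned package
# versions alongside results so others can reproduce the environment.
import platform
import sys
from importlib import metadata

print("platform:", platform.platform())
print("python:", sys.version.split()[0])
for pkg in ("numpy", "scikit-learn"):   # packages to pin vary per project
    try:
        print(pkg, metadata.version(pkg))
    except metadata.PackageNotFoundError:
        print(pkg, "not installed")
```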
PR.1 - Training on sensitive data
PR.1.1 - Does your method produce a model that could contain data points, or parts of data points, from the training data?
PR.1.2 - Elaborate on possible data traces in your model or why there cannot be any.
PR.1.3 - Privacy-preserving techniques
PR.1.3.1 - Which privacy-preserving techniques did you apply during local model training? optional
PR.1.3.2 - Elaborate on how you applied privacy-preserving techniques.
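One technique that could be reported under PR.1.3 is output perturbation with the Gaussian mechanism. The sketch below uses the classical noise calibration sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon (valid for epsilon < 1); it is an illustration of the idea, not the registry's prescribed method:

```python
# Hedged sketch for PR.1.3: Gaussian-mechanism noise on a released statistic.
import numpy as np

def gaussian_mechanism(value, sensitivity, eps, delta, rng):
    sigma = np.sqrt(2 * np.log(1.25 / delta)) * sensitivity / eps
    return value + rng.normal(0.0, sigma)

rng = np.random.default_rng(0)
data = rng.uniform(0, 1, size=1000)       # values assumed bounded in [0, 1]
sens = 1.0 / len(data)                    # sensitivity of the mean
print(gaussian_mechanism(data.mean(), sens, eps=0.5, delta=1e-5, rng=rng))
```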
PR.2 - Federated learning
PR.2.1 - Is your model trained in a federated fashion, i.e., jointly with other participants?
PR.2.2 - Which data is shared with other participants?
PR.2.3 - Mode of communication
PR.2.3.1 - Which types of communication do you use between participants?
PR.2.3.2 - Which other mode of communication did you use to collectively train the model?
PR.2.4 - Privacy-preserving techniques
PR.2.4.1 - Which privacy-preserving techniques did you apply during the federated training? optional
PR.2.4.2 - Elaborate on how you applied federated privacy-preserving techniques.
PR.2.5 - How does the transmitted data relate to the training data (over all iterations)?
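To make PR.2.2 and PR.2.5 concrete: in a FedAvg-style scheme, only parameter vectors (weighted by local sample counts) are transmitted, never raw records. A miniature sketch with least-squares fits standing in for local training; all data and shapes are assumptions:

```python
# Sketch for PR.2: federated averaging in miniature; only parameters and
# sample counts cross site boundaries.
import numpy as np

def local_fit(X, y):
    """Least-squares fit as a stand-in for local model training."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def federated_average(params, n_samples):
    weights = np.asarray(n_samples) / np.sum(n_samples)
    return sum(w * p for w, p in zip(weights, params))

rng = np.random.default_rng(0)
sites = [(rng.normal(size=(n, 3)), rng.normal(size=n)) for n in (50, 80, 120)]
params = [local_fit(X, y) for X, y in sites]
global_params = federated_average(params, [len(y) for _, y in sites])
print(global_params)
```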
EP.1.1 - Is there a measure or procedure to capture the joint importance of multiple loci?
EP.1.2 - Which measure or procedure was applied?
EP.1.3 - Can it be converted into an odds ratio (OR) or relative risk (RR)?
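For EP.1.3, conversion to an OR or RR is straightforward when the measure can be summarized as a 2x2 exposure-outcome table; the counts below are made up for illustration:

```python
# Sketch for EP.1.3: odds ratio and relative risk from a 2x2 table.
def or_rr(a, b, c, d):
    """a/b: exposed cases/non-cases; c/d: unexposed cases/non-cases."""
    odds_ratio = (a * d) / (b * c)
    relative_risk = (a / (a + b)) / (c / (c + d))
    return odds_ratio, relative_risk

print(or_rr(a=30, b=70, c=10, d=90))  # -> (3.857..., 3.0)
```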
EP.2 - Affected biological levels
EP.2.1 - Does your AI account for multiple affected biological levels?
EP.2.2 - Elaborate on the affected biological levels.
Legend
Field types: Conditional · Radio selection · Dropdown selection · Checkboxes · Tags · File
Score markers: Affects reproducibility score · Affects validation score · Affects privacy score