---
title: 'Uploading storage and deployment data to the {% data variables.product.virtual_registry %}'
intro: 'Associate packages and builds in your organization with storage and deployment data.'
versions:
contentType: how-tos
product: 'Organization accounts on any plan'
permissions: 'Anyone with write access to an organization-owned repository'
shortTitle: Upload linked artifacts
category:
---
The {% data variables.product.virtual_registry %} includes storage records and deployment records for artifacts that you build in your organization. Metadata for each artifact is provided by your organization using one of the following methods:
- A workflow containing one of {% data variables.product.company_short %}'s actions for artifact attestations
- An integration with Dynatrace, JFrog Artifactory, or {% data variables.product.prodname_microsoft_defender %}
- A custom script using the artifact metadata REST API
The available methods depend on whether you are uploading a storage record or a deployment record. For more information about record types, see AUTOTITLE.
You can upload a storage record by creating an artifact attestation or enabling an integration with JFrog Artifactory. If you don't want to use these options, you must set up a custom integration with the REST API.
You can upload a storage record for an artifact using {% data variables.product.github %}'s first-party actions for artifact attestations, in the same workflow you use to build the artifact. These actions create signed provenance and integrity guarantees for the software you build, and automatically upload a storage record to the {% data variables.product.virtual_registry %}.
{% data reusables.actions.attestation-virtual-registry %}
For more information on using these actions, see AUTOTITLE.
If the artifact does not require attestation, or if you want to upload deployment records or additional storage metadata, see the following sections.
The two-way integration with JFrog Artifactory automatically keeps your storage records on {% data variables.product.github %} up to date with the artifact on JFrog. For example, attestations you create on {% data variables.product.github %} are automatically uploaded to JFrog, and promoting an artifact to production on JFrog automatically adds the production context to the record on {% data variables.product.github %}.
For setup instructions, see Get Started with JFrog Artifactory and GitHub Integration in the JFrog documentation.
For artifacts that do not need to be attested and are not stored on JFrog, you can create a custom integration using the Create artifact metadata storage record API endpoint. You should configure your system to call the endpoint whenever an artifact is published to your chosen package repository.
> [!NOTE]
> If the artifact is not associated with a provenance attestation on {% data variables.product.github %}, the `github_repository` parameter is mandatory.
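As a sketch of what such a custom integration might look like, the script below assembles a storage record payload with `jq` and sends it with the {% data variables.product.prodname_cli %}. The field names mirror the workflow example later in this article, plus the `github_repository` parameter required for unattested artifacts; the organization name, digest, and version are placeholders, and you should check the REST API reference for the exact schema.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical values for illustration only.
ORG="my-org"

# Build the storage record payload with jq so values are safely JSON-escaped.
# Because this artifact has no provenance attestation, github_repository is required.
jq -n \
  --arg name "my-container-image" \
  --arg digest "sha256:0123456789abcdef" \
  --arg version "1.1.2" \
  --arg repo "$ORG/my-repository" \
  '{name: $name, digest: $digest, version: $version, registry_url: "https://azurecr.io", github_repository: $repo}' \
  > create-record.json

# Send the record. Requires gh and a token with access to the organization.
if command -v gh >/dev/null && [ -n "${GITHUB_TOKEN:-}" ]; then
  gh api -X POST "orgs/$ORG/artifacts/metadata/storage-record" --input create-record.json
fi
```

Calling this script from the publish step of your package pipeline keeps the registry in sync without requiring a {% data variables.product.prodname_actions %} workflow.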
If you monitor deployed workloads with Dynatrace or {% data variables.product.prodname_mdc_definition %}, you can use an integration to automatically sync deployment data to the {% data variables.product.virtual_registry %}. Otherwise, you must set up a custom integration with the REST API.
You can configure Dynatrace to send deployment records to {% data variables.product.github %} for container images running in your Dynatrace-monitored Kubernetes environments. Dynatrace maps deployed images to your repositories, then reports runtime context.
In addition, deployment records from Dynatrace can include runtime risk context, such as:
- Public internet exposure
- Sensitive data access
You can use this context in organization-level alert filtering and in security campaigns to prioritize remediation for alerts that affect internet-exposed or sensitive-data workloads.
For setup instructions, see {% data variables.product.prodname_GHAS %} security integration - Get Started in the Dynatrace documentation.
You can connect your {% data variables.product.prodname_mdc %} instance to your {% data variables.product.github %} organization. {% data variables.product.prodname_mdc %} will automatically send deployment and runtime data to {% data variables.product.github %}.
For setup instructions, see Quick Start: Connect your {% data variables.product.github %} Environment to {% data variables.product.prodname_microsoft_defender %} in the documentation for {% data variables.product.prodname_mdc %}.
{% data reusables.security.production-context-mdc-preview %}
The Create an artifact deployment record API endpoint allows systems to send deployment data for a specific artifact to {% data variables.product.github %}, such as its name, digest, environments, cluster, and deployment. You should call this endpoint whenever an artifact is deployed to a new staging or production environment.
> [!NOTE]
> If the artifact is not associated with a provenance attestation on {% data variables.product.github %}, the `github_repository` parameter is mandatory.
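A minimal sketch of such a call is shown below. The endpoint path and field names here are assumptions modeled on the storage record endpoint; the organization name, environment, and cluster values are placeholders, so consult the REST API reference for the exact path and schema before using this pattern.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical values for illustration only.
ORG="my-org"

# Build a deployment record payload with jq so values are safely JSON-escaped.
jq -n \
  --arg name "my-container-image" \
  --arg digest "sha256:0123456789abcdef" \
  --arg repo "$ORG/my-repository" \
  '{name: $name, digest: $digest, environment: "production", cluster: "prod-east", github_repository: $repo}' \
  > deployment-record.json

# Send the record whenever the artifact reaches a new environment.
# The path below is an assumed example, not a confirmed endpoint.
if command -v gh >/dev/null && [ -n "${GITHUB_TOKEN:-}" ]; then
  gh api -X POST "orgs/$ORG/artifacts/metadata/deployment-record" --input deployment-record.json
fi
```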
To check that a record has been uploaded successfully, you can view the updated artifact in your organization settings. See AUTOTITLE.
It is not possible to delete an artifact from the {% data variables.product.virtual_registry %}. However, you can update a storage record or deployment record to reflect an artifact's status. See AUTOTITLE.
You can upload data to the {% data variables.product.virtual_registry %} in the same workflow you use to build and publish an artifact.
The following example builds and publishes a Docker image, then uses the {% raw %}`${{ steps.push.outputs.digest }}`{% endraw %} output in the next step to generate a provenance attestation.
The `attest` action automatically uploads a storage record to the {% data variables.product.virtual_registry %} when `push-to-registry: true` is set and the workflow includes the `artifact-metadata: write` permission.
{% raw %}

```yaml
env:
  IMAGE_NAME: my-container-image
  ACR_ENDPOINT: my-registry.azurecr.io

jobs:
  generate-build:
    name: Build and publish Docker image
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
      attestations: write
      packages: write
      artifact-metadata: write
    steps:
      - name: Build and push Docker image
        id: push
        uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83
        with:
          context: .
          push: true
          tags: |
            ${{ env.ACR_ENDPOINT }}/${{ env.IMAGE_NAME }}:latest
            ${{ env.ACR_ENDPOINT }}/${{ env.IMAGE_NAME }}:${{ github.sha }}

      - name: Generate artifact attestation
        uses: actions/attest@v4
        with:
          subject-name: ${{ env.ACR_ENDPOINT }}/${{ env.IMAGE_NAME }}
          subject-digest: ${{ steps.push.outputs.digest }}
          push-to-registry: true
```

{% endraw %}

Alternatively, if you are not generating an attestation, you can call the artifact metadata API directly.
{% raw %}

```yaml
env:
  IMAGE_NAME: my-container-image
  IMAGE_VERSION: 1.1.2
  ACR_ENDPOINT: my-registry.azurecr.io

jobs:
  generate-build:
    name: Build and publish Docker image
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
      packages: write
      artifact-metadata: write
    steps:
      - name: Build and push Docker image
        id: push
        uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83
        with:
          context: .
          push: true
          tags: |
            ${{ env.ACR_ENDPOINT }}/${{ env.IMAGE_NAME }}:latest
            ${{ env.ACR_ENDPOINT }}/${{ env.IMAGE_NAME }}:${{ github.sha }}

      - name: Create artifact metadata storage record
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          jq -n \
            --arg artifactName "${{ env.IMAGE_NAME }}" \
            --arg artifactVersion "${{ env.IMAGE_VERSION }}" \
            --arg artifactDigest "${{ steps.push.outputs.digest }}" \
            '{"name": $artifactName, "digest": $artifactDigest, "version": $artifactVersion, "registry_url": "https://azurecr.io", "repository": "my-repository"}' > create-record.json
          gh api -X POST orgs/${{ github.repository_owner }}/artifacts/metadata/storage-record --input create-record.json
        shell: bash
```

{% endraw %}

Once you have uploaded data, teams in your organization can use the context from storage and deployment data to prioritize security alerts. See AUTOTITLE.