{"meta":{"title":"Responsible use of GitHub Code Quality","intro":"Use GitHub Code Quality responsibly by understanding its purposes, capabilities, and limitations.","product":"Security and code quality","breadcrumbs":[{"href":"/en/enterprise-cloud@latest/code-security","title":"Security and code quality"},{"href":"/en/enterprise-cloud@latest/code-security/responsible-use","title":"Responsible use"},{"href":"/en/enterprise-cloud@latest/code-security/responsible-use/code-quality","title":"Code quality"}],"documentType":"article"},"body":"# Responsible use of GitHub Code Quality\n\nUse GitHub Code Quality responsibly by understanding its purposes, capabilities, and limitations.\n\n> \\[!NOTE]\n> GitHub Code Quality is currently in public preview and subject to change.\n> During public preview, Code Quality will not be billed, although Code Quality scans will consume GitHub Actions minutes.\n\n## About GitHub Code Quality\n\nGitHub Code Quality helps users improve code reliability, maintainability, and overall project health by surfacing actionable feedback and offering automatic fixes for any findings in pull requests and on the default branch.\n\nWhen you enable Code Quality, two types of analysis run:\n\n* **CodeQL quality queries** run using code scanning analysis and identify problems with the maintainability, reliability, or style of code. This runs on changed code in all pull requests against the default branch. It also runs periodically on the full default branch.\n\n* **Large Language Model (LLM)-powered analysis** provides additional insights into potential quality concerns beyond what is covered by deterministic engines like CodeQL. This runs automatically on files changed in recent pushes to the default branch. 
These findings are displayed in Code Quality's **AI findings** dashboard, under the **Security and quality** tab of the repository.\n\nWhen a quality issue is detected by either type of analysis, **Copilot Autofix** suggests a relevant fix that developers can review and apply.\n\nOn pull requests, Code Quality results are displayed as comments left by the `github-code-quality` bot, which include a suggested autofix wherever possible.\n\n## LLM-powered analysis for recent pushes\n\nAfter each push to the default branch, the LLM analyzes recently changed files for maintainability, reliability, and other quality issues. Code Quality inspects your code and provides feedback using a combination of natural language processing and machine learning.\n\n### Input processing\n\nThe code changes are combined with other relevant contextual information to form a prompt, and that prompt is sent to a large language model.\n\n### Language model analysis\n\nThe prompt is then passed through the Copilot language model, a neural network trained on a large body of text data. 
The language model analyzes the input prompt.\n\n### Response generation\n\nThe language model generates a response based on its analysis of the input prompt. This response can take the form of natural-language suggestions, code suggestions, or both.\n\n### Output formatting\n\nThe response generated by Code Quality is presented to the user directly, providing code feedback linked to specific lines of specific files. Where Code Quality has provided a code suggestion, the suggestion is presented as a suggested change, which can be applied with a couple of clicks.\n\n## GitHub Copilot Autofix suggestions\n\nOn pull requests, Code Quality results found by code scanning analysis are sent as input to the LLM. If the LLM can generate a potential fix, the `github-code-quality` bot posts a comment with a suggested change directly in the pull request.\n\nIn addition, users can request autofix generation for results in the default branch.\n\nFor more information on the suggestion generation process for GitHub Copilot Autofix, see [Responsible use of Copilot Autofix for code scanning](/en/enterprise-cloud@latest/code-security/code-scanning/managing-code-scanning-alerts/responsible-use-autofix-code-scanning).\n\n## Use case for GitHub Code Quality\n\nThe goals of GitHub Code Quality are to:\n\n* Surface code quality issues across your repository, so developers and repository administrators can quickly identify, prioritize, and report on areas of risk.\n* Accelerate remediation work by offering Copilot Autofix suggestions for results found by scans of the default branch, as well as for findings in recent pushes to the default branch.\n* Quickly provide actionable feedback on a developer's code. 
On pull requests, Code Quality combines information on best practices with details of the codebase and findings to suggest a potential fix to the developer.\n\n## Improving the performance of GitHub Code Quality\n\nIf you encounter any issues or limitations with suggested fixes on pull requests, we recommend that you provide feedback by using the thumbs up and thumbs down buttons on the `github-code-quality` bot's comments. This helps GitHub improve the tool and address any concerns or limitations.\n\n## Limitations of GitHub Code Quality\n\n### Limitations of Code Quality's LLM-powered analysis\n\nCode Quality's LLM-powered analysis uses the same underlying language model and analysis engine as GitHub Copilot code review. Therefore, it shares similar limitations when analyzing code quality. Key considerations include:\n\n* Incomplete detection\n* False positives\n* Code suggestion accuracy\n* Potential biases\n\nFor detailed information about these limitations, see [Responsible use of GitHub Copilot code review](/en/enterprise-cloud@latest/copilot/responsible-use/code-review).\n\nYou should always review the findings surfaced by GitHub Code Quality's LLM-powered analysis to verify their accuracy and applicability to your codebase.\n\n### Limitations of Copilot Autofix\n\nCopilot Autofix for Code Quality findings cannot generate a fix for every finding in every situation. The feature operates on a best-effort basis and is not guaranteed to succeed 100% of the time.\n\nWhen you review a suggestion from Copilot Autofix, you must always consider the limitations of AI and edit the suggested changes as needed before you accept them. 
\n\nFor more information on the limitations of Copilot Autofix, the quality of its suggestions, and how best to mitigate those limitations, see [Responsible use of Copilot Autofix for code scanning](/en/enterprise-cloud@latest/code-security/code-scanning/managing-code-scanning-alerts/responsible-use-autofix-code-scanning).\n\n## Provide feedback\n\nYou can provide feedback on GitHub Code Quality in the [community discussion](https://github.com/orgs/community/discussions/177488).\n\n## Next steps\n\nSee how GitHub Code Quality works on your default branch to surface code quality issues and help you understand your repository's code health at a glance. To get started, see [Quickstart for GitHub Code Quality](/en/enterprise-cloud@latest/code-security/code-quality/get-started/quickstart)."}