
Rule 5: Measure outcomes, not opinions, with DORA metrics

By Gregory Zuckerman
Last updated: October 30, 2025 8:50 am


Begin where value is visible and harm is minimal; a small pull request summary sketch follows the list:

  • Unit and integration test generation
  • Code comments
  • API stubs
  • Pull request summaries
  • Release notes
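As a concrete illustration of the pull request summary item above, here is a minimal sketch that feeds a git diff to a hosted model and asks for a draft description. It assumes the OpenAI Python SDK and an API key in the environment; the model name and prompt wording are placeholders, not recommendations.

```python
# Hypothetical sketch: draft a pull request summary from a git diff.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment;
# the model name and prompt wording are placeholders.
import subprocess
from openai import OpenAI

def draft_pr_summary(base: str = "main") -> str:
    # Collect the diff between the base branch and the current branch.
    diff = subprocess.run(
        ["git", "diff", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout[:20000]  # truncate very large diffs before sending

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Summarize this diff as a pull request description: "
                        "intent, key changes, and what reviewers should test."},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_pr_summary())
```

The same pattern extends to release notes and code comments: the human still reads, edits, and approves the draft before it ships.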

AI can triage backlogs, cluster related issues, and surface dependencies so engineers can focus on high-impact work. Industry tools are racing to support this pattern.
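For illustration, here is a rough sketch of the clustering idea, assuming scikit-learn is available; the issue titles and cluster count are invented examples, not data from any real backlog.

```python
# Illustrative sketch only: group backlog issues by textual similarity so
# related tickets can be triaged together. Uses TF-IDF + KMeans from
# scikit-learn; issue titles and the cluster count are made up.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

issues = [
    "Login fails with expired refresh token",
    "Refresh token rotation not honored on mobile",
    "Checkout page times out under load",
    "Payment service returns 504 during peak traffic",
    "Typo on the settings page header",
]

# Vectorize the issue text and assign each ticket to a cluster.
vectors = TfidfVectorizer(stop_words="english").fit_transform(issues)
labels = KMeans(n_clusters=3, n_init="auto", random_state=0).fit_predict(vectors)

for cluster, title in sorted(zip(labels, issues)):
    print(cluster, title)
```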

Platforms like GitHub and Atlassian are now shipping AI assistants and agent hubs that draft tests, explain diffs, and auto-generate documentation from version history.

Pilot these capabilities in sandboxes before they touch live customer data or critical services. AI can write and refactor source code, but humans remain accountable for merges, deployments, and exceptions.

Make explainability mandatory: suggestions should reference source files, specifications, or test cases. Record prompts, model versions, outputs, and the actual review decisions. Your audit and incident response teams will thank you; as Info-Tech warns, AI is not a one-size-fits-all solution.
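A minimal sketch of that audit trail, assuming a simple append-only JSONL file; the field names, file path, and example values are placeholders.

```python
# Sketch of the audit trail described above: append one JSON line per AI
# suggestion, capturing prompt, model version, output, and the human review
# decision. Field names and the file path are assumptions.
import json
import time
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")

def record_suggestion(prompt: str, model: str, output: str,
                      decision: str, reviewer: str) -> None:
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "prompt": prompt,
        "model": model,          # e.g. provider/model@version
        "output": output,
        "decision": decision,    # "accepted", "edited", or "rejected"
        "reviewer": reviewer,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_suggestion(
    prompt="Generate unit tests for parse_invoice()",
    model="example-provider/code-model@2025-01",
    output="def test_parse_invoice_rejects_negative_totals(): ...",
    decision="edited",
    reviewer="j.doe",
)
```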

Technical talent needs mentoring to understand the tool’s limits, calibrate trust, and avoid over-reliance. Human supervision is the safety net that converts speed into predictable quality.

For sensitive workloads, prefer enterprise offerings with tenant isolation and no-training-on-your-data assurances.

Use private endpoints, retrieval gating, and data loss prevention to keep secrets out of prompts and outputs. Also, keep model responses externalized from application code rather than hard-coding them.
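As a sketch of the "keep secrets out of prompts" step, the snippet below scrubs a few well-known credential formats before a prompt leaves your network. The patterns are illustrative only and do not amount to a complete DLP policy.

```python
# Illustrative pre-flight scrub: redact obvious secrets before a prompt is
# sent to any external model endpoint. The patterns below are examples, not
# a complete DLP policy; pair this with gateway-level DLP in practice.
import re

SECRET_PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),
    (re.compile(r"ghp_[A-Za-z0-9]{36}"), "[REDACTED_GITHUB_TOKEN]"),
    (re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]+?"
                r"-----END [A-Z ]*PRIVATE KEY-----"),
     "[REDACTED_PRIVATE_KEY]"),
    (re.compile(r"(?i)(password|secret|api[_-]?key)\s*[:=]\s*\S+"),
     r"\1=[REDACTED]"),
]

def scrub(prompt: str) -> str:
    # Apply each redaction pattern in turn before the prompt leaves the host.
    for pattern, replacement in SECRET_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(scrub("Debug this: api_key = sk-live-123456 fails against prod"))
```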


Define a baseline with DORA metrics, plus defect escape rate and test coverage (a rough calculation sketch follows the list):

  • Lead time
  • Deployment frequency
  • Change failure rate
  • Mean time to restore
  • Defect escape rate
  • Test coverage
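Here is a rough sketch of how such a baseline might be computed from deployment and incident records; the records, field names, and 30-day window are invented for illustration, and a real pipeline would pull them from CI/CD and incident tooling.

```python
# Rough baseline sketch: compute the four DORA metrics from deployment and
# incident records. All values below are made-up examples.
from datetime import datetime, timedelta
from statistics import mean

deployments = [
    # (commit time, deploy time, caused an incident?)
    (datetime(2025, 10, 1, 9), datetime(2025, 10, 1, 15), False),
    (datetime(2025, 10, 3, 11), datetime(2025, 10, 4, 10), True),
    (datetime(2025, 10, 7, 14), datetime(2025, 10, 7, 18), False),
]
restore_times = [timedelta(hours=3)]  # time to restore for each incident
window_days = 30

lead_time = mean((deploy - commit).total_seconds() / 3600
                 for commit, deploy, _ in deployments)
deploy_frequency = len(deployments) / window_days
change_failure_rate = sum(failed for *_, failed in deployments) / len(deployments)
mttr = mean(t.total_seconds() / 3600 for t in restore_times)

print(f"Lead time: {lead_time:.1f} h, deploys/day: {deploy_frequency:.2f}, "
      f"CFR: {change_failure_rate:.0%}, MTTR: {mttr:.1f} h")
```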

Compare before and after AI assistance at the team level. Expect a short-term dip as developers learn to prompt, validate, and review AI output, a pattern long observed by software measurement firms such as Quantitative Software Management.

Augment velocity stats with experience measures:

  • Time spent on undifferentiated work
  • Developer satisfaction
  • Context-switching reductions

GitHub research has repeatedly shown productivity and satisfaction gains when AI handles repetitive tasks; validate whether that holds for your codebase and domain.

Rule 6: Upskill and assign ownership for AI adoption

Train developers to be great AI editors, not just faster typists: prompting patterns, test-first habits, code reading, and threat modeling. Then assign ownership: a model steward for safety and performance, and an AI product owner to align use cases with business goals and investment priorities.

Create cost visibility from the start. Track token usage, model selection, and caching policies like any cloud spend. Small inefficiencies at the prompt layer can add up to material bills in production.
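A minimal sketch of that kind of cost tracking; the per-million-token prices below are placeholders, and real numbers should come from your provider's rate card and the usage metadata on each API response.

```python
# Cost-visibility sketch: tally token usage and estimated spend per model.
# The rates are placeholder USD prices per million tokens (input, output).
from collections import defaultdict

PRICE_PER_M_TOKENS = {
    "small-model": (0.15, 0.60),
    "large-model": (2.50, 10.00),
}

spend = defaultdict(float)
tokens = defaultdict(lambda: [0, 0])

def record_usage(model: str, input_tokens: int, output_tokens: int) -> None:
    # Accumulate token counts and estimated cost per model.
    in_rate, out_rate = PRICE_PER_M_TOKENS[model]
    spend[model] += (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000
    tokens[model][0] += input_tokens
    tokens[model][1] += output_tokens

record_usage("small-model", 12_000, 1_800)   # e.g. a PR summary request
record_usage("large-model", 45_000, 6_000)   # e.g. a large refactor proposal

for model, cost in spend.items():
    print(f"{model}: {tokens[model][0]:,} in / {tokens[model][1]:,} out "
          f"tokens, ~${cost:.4f}")
```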

The No. 1 risk: data exposure and shadow AI

The single fastest way to derail AI in development is accidental data leakage: secrets in prompts, logs that include customer records, or snippets pasted into external tools. As Digital.ai’s survey suggests, the oversight gap only widens as AI adoption outstrips governance. The shadow AI problem worsens when employees install unapproved extensions or lean on unvetted public chatbots that are invisible to IT.

Mitigations are simple to enumerate but hard to sustain (a minimal gating sketch follows the list):

  • Enterprise-approved tools
  • Automatic redaction and secrets detection on every request
  • Private or fine-tuned models for sensitive data
  • Network egress controls
  • Continuous training on what never belongs in a prompt
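To make the approved-tools and egress-control items concrete, the sketch below refuses requests to anything outside an approved endpoint list; the hostnames are placeholders, and real enforcement belongs at the network and proxy layer as well.

```python
# Minimal gating sketch for the mitigations above: only let requests through
# to enterprise-approved model endpoints, and refuse anything else before it
# leaves the network. Hostnames here are illustrative placeholders.
from urllib.parse import urlparse

APPROVED_ENDPOINTS = {
    "models.internal.example.com",            # private/fine-tuned deployment
    "enterprise-api.approved-vendor.example", # vendor with no-training terms
}

def check_egress(url: str) -> None:
    host = urlparse(url).hostname
    if host not in APPROVED_ENDPOINTS:
        raise PermissionError(f"Blocked unapproved AI endpoint: {host}")

check_egress("https://models.internal.example.com/v1/chat")   # allowed
try:
    check_egress("https://random-public-chatbot.example/api")  # blocked
except PermissionError as err:
    print(err)
```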

Marry that with rigorous logging and you sharply reduce breach, compliance, and IP exposure risk while preserving speed. AI can indeed make Agile agile, as long as it is governed, piloted thoughtfully, and evaluated against business results.

Stick to the six rules above, keep humans firmly in the loop, and neutralize the top remaining risk: data exposure. Deployed correctly, AI becomes a reliable SDLC sidekick rather than a black box bolted on from the outside.
