Resource Center

Clear Measure provides resources to empower software leaders and developers in software delivery, fulfilling our vision to improve and inspire software teams worldwide.

One of our clients was spending five full days on every manual deployment — not because they lacked talent, but because their Octopus Deploy environment had never been properly assessed or updated since it was first stood up years earlier. Tentacles (Octopus's deployment agents) had accumulated. Integrations had drifted. And nobody had stopped long enough to ask whether any of it still made sense. If that sounds familiar, you're not alone. Aging, unexamined Octopus Deploy environments are one of the most consistent problems we see across enterprise teams in healthcare, financial services, insurance, energy, and beyond.

Here's how we help teams fix that — and what's possible on the other side.

Phase I: Assessing Your Octopus Deploy Environment Before Migration

When a client comes to us with an aging Octopus Deploy environment, we don't start by recommending an immediate upgrade. We start by looking carefully at what is already there. Our Octopus Deploy Migration Planning engagement — designed specifically for complex environments — gives teams a complete picture before a single migration step is taken. This includes:
  • Analysis of the existing Octopus Deploy instance — what version, what configuration, what integrations, what's working and what's fragile
  • Analysis of the software being deployed and the nature of the environments being deployed to (cloud, hybrid, on-prem)
  • Sequencing of major migration or upgrade steps so nothing falls through the cracks
  • Best practices recommendations tailored to your architecture
In Phase I, our team conducts a thorough technical analysis of the client's Octopus Deploy environment, assesses and prioritizes their existing application portfolio across multiple environments, evaluates future applications as candidates for migration, and produces a formal migration plan with documented steps and a timeline. Trying to determine whether to stay on-prem or move to the cloud? We will uncover the information you need to make a confident decision. The output of Phase I isn't just a document — it's the strategic foundation that makes Phase II possible without unnecessary risk.

Phase II: Proof of Concept for Your Octopus Deploy Migration

With the migration plan from Phase I, we move into Phase II: Proof of Concept Implementation. Rather than attempting a full migration all at once, we use the POC approach to validate assumptions, surface any environment-specific surprises, and demonstrate end-to-end success for one application before scaling. This single-application migration serves as the proving ground — testing the target Octopus Deploy environment setup, validating deployment pipelines, and building client team confidence in the approach. A well-run POC de-risks the broader migration, gives stakeholders something concrete to evaluate, and creates a replicable pattern your team can follow for every subsequent application.

Phase III: Full Portfolio Migration with Octopus Deploy

Following Phases I and II, you'll have something most teams never start with: a validated migration pattern, a target environment that's been proven to work, and a team that has already done this once successfully. That changes everything about how the remaining portfolio gets migrated.

At that point, we'll present a full estimate for converting the remainder of your applications to Octopus Deploy. From there, you have options. Our team can execute the full migration on your behalf, or your team can take on the work directly with our experts advising alongside them. What doesn't change regardless of the path: your team will understand exactly how the work is done, with detailed documentation and hands-on advisement until they're fully confident managing the environment on their own.

Octopus Deploy Training: Up-Leveling Your Team for Long-Term Success

Deploying a new version of Octopus is only as valuable as your team's ability to use it well. That's why team enablement is built into every Clear Measure engagement — not treated as an afterthought.

We meet teams where they are. If your team is new to Octopus or needs to reset some ingrained habits, a focused 2-hour orientation gets everyone aligned quickly. For organizations ready to go deeper, a full day of platform engineering planning helps connect architecture decisions to long-term delivery goals. And for enterprise teams looking to build advanced expertise and deploy at scale, we offer pro-level training customized to your environment and roadmap.

As Clear Measure Chief Architect Jeffrey Palermo puts it:
"There isn't anything Octopus can't deploy. But if automated DevOps is new to your team, make sure to plan your platform engineering properly. Empower your team to establish quality, achieve stability, and increase speed of delivery."
Learn more about how we work with Octopus Deploy across industries and team sizes.

What Our Clients Are Saying
"It was taking our team five days to do a proper manual deployment, so I decided it was time to move automation to the next level. By increasing the utilization of Octopus Deploy's automation features from 10% to 80%, the company has increased productivity by over 84%. Now we have more efficiency and accuracy. It's a completely different deployment experience." — SVP of Operations, Alphapoint

"Our old method of deployments was cumbersome on our IT team, and required significant time and stress. Clear Measure helped us set up an Octopus Deploy configuration that allows us to initiate mid-day deployments, saving time we would have normally spent after-hours to do a deployment." — Frontier
How Octopus Deploy Fits Into a Modern AI DevOps Architecture

Upgrading Octopus Deploy is one piece of a larger picture. At Clear Measure, we view deployment tooling as a core component of a modern AI DevOps architecture: the interconnected system of pipelines, automation, feedback loops, and intelligence that allows engineering teams to deliver software reliably and rapidly.

When your Octopus Deploy instance is current, properly configured, and correctly integrated with your build servers and cloud environments, it becomes the foundation that makes AI-assisted delivery possible — automated validation, complete auditability, and the kind of deployment speed that lets engineers focus on architecture and innovation instead of firefighting. Without that foundation, even the best AI tooling has nothing reliable to build on.

To see what this looks like end-to-end, download our AI DevOps Architecture Poster — a print-ready reference designed for .NET and Azure engineering teams.

Octopus Deploy Migration Results: Real Client Outcomes

The numbers speak for themselves. In one engagement with a FinTech firm, new environments that previously took days to provision were up and running in 4 hours or less, features could be deployed on demand, and overall team productivity increased by 84%. Read the full case study: Optimized DevOps Roadmap to Deliver Faster Results

In another engagement, a supply chain management company with over 200 employees eliminated tedious manual deployments entirely — gaining the flexibility to deploy mid-day without disruption, proactively catching errors before they reached production, and freeing their IT team from the after-hours grind that had become the norm. Read the full case study: Streamline Deployments and Reduce Cycle Time

Both transformations started with the same foundational work — assessing what existed, planning the right path forward, and proving it out before scaling. The pattern holds across every industry we work in:
  1. Inspect the existing environment with honesty and rigor
  2. Plan the migration before touching anything
  3. Prove the approach with a contained POC
  4. Up-level the team to maximize their investment in Octopus Deploy
  5. Iterate through remaining applications with confidence
The result isn't just a newer version of Octopus Deploy. It's a team that understands their deployment platform, a pipeline that reflects current best practices, and an organization positioned to move faster with less risk.

Start Your Octopus Deploy Migration Planning Today

If your team is running an older self-hosted Octopus Deploy instance and isn't sure where to start, the best first step is a clear-eyed look at what you actually have — an honest technical assessment that tells you where your environment stands, what risk is accumulating, and what a better future state looks like.

That's exactly what our Octopus Deploy Migration Planning engagement is designed to deliver. Explore our Octopus Deploy practice to learn more, or contact us to start the conversation.

97% of developers now use AI coding tools. But using a coding assistant and running an AI-driven development environment are two very different things — and confusing them is costing teams quality, maintainability, and time.
When most engineering teams talk about "using AI in development," they mean a developer has GitHub Copilot installed. Suggestions appear as they type. Some are useful, some aren't. The developer accepts what looks right and moves on. That's AI-assisted coding. It's useful. It's also only one small piece of what AI-driven development actually is — and conflating the two is one of the most common mistakes teams make when trying to modernize their delivery process.

An AI coding assistant operates at the level of a single engineer writing a single file. It autocompletes functions, suggests variable names, and sometimes generates a block of boilerplate. The engineer is still responsible for every decision — the tool just types faster.

An AI-driven development environment operates at the level of the entire delivery system. It changes how work is specified, how it's designed, how it's tested, how it's deployed, and how production health is monitored. The engineer's role shifts from authoring every line to reviewing, validating, and directing AI-generated output within a system designed to catch problems automatically.

AI-Assisted Coding

Autocompletes code inside the IDE. Speeds up authoring. The delivery system around it is unchanged — manual processes, manual testing, manual deployment.

AI-Driven Development

Automates requirements, design, code generation, testing, CI/CD, UAT, and production monitoring. The entire delivery system is built to support and validate AI-generated output.
"A coding assistant accelerates one engineer writing one file. An AI-driven development environment automates the system of delivering software."
According to GitHub's 2024 State of the Octoverse, 97% of developers now use AI coding tools in some capacity. Adoption has become the norm. But adoption alone doesn't produce the results teams expect — and in many cases, it makes things worse. When AI coding tools are dropped into an existing delivery process without changing the system around them, the results are predictable:
  • Generated code reflects the inconsistencies of the existing codebase — at higher volume and faster pace
  • Defects that would have taken a human time to introduce now appear in bulk
  • Velocity increases short-term while technical debt accumulates invisibly
  • The codebase becomes harder to maintain, not easier
The challenge has shifted from whether to adopt AI to how to do it without accumulating technical debt, degrading maintainability, or losing architectural coherence. That shift requires thinking about AI as a delivery system problem — not a tooling problem. A true AI-driven development environment automates work across every phase of the software delivery lifecycle. Here's what that looks like in practice:
  1. Requirements: Structured checklist templates and AI-generated specs replace unstructured discovery sessions. Analysts produce precise, machine-readable requirements that feed directly into technical design — no translation layer, no information lost.
  2. Design: Architecture patterns decompose requirements into development tasks automatically. Design becomes repeatable rather than ad hoc — every feature follows the same structural logic, which is exactly what makes AI code generation reliable downstream.
  3. Code generation: LLMs generate implementation code and test scenarios directly from design specs. Engineers review, validate, and extend — they are no longer authoring from scratch. The quality of this output depends entirely on the quality of phases 1 and 2 feeding into it.
  4. CI/CD: Fully automated pipelines handle build, multi-level testing, environment provisioning, UAT promotion, and production deployment. Every change — regardless of how fast it was generated — moves through the same quality gates before it reaches a customer.
  5. Production monitoring: Telemetry is analyzed on a defined cadence — hourly, daily, weekly. AI surfaces anomalies and generates improvement suggestions automatically. The system doesn't just deliver software; it watches what happens after delivery and feeds that signal back into the process.
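Phase 4 is typically the most mechanical piece to stand up. As an illustrative sketch only — assuming a .NET project built on GitHub Actions, with a hypothetical `promote.sh` script standing in for an Octopus Deploy promotion step — a pipeline that enforces the same quality gates on every change might look like:

```yaml
# Hypothetical CI/CD sketch: every change, AI-generated or human-written,
# passes the same gates before promotion. Job names, test categories,
# and the promote script are illustrative assumptions.
name: delivery-pipeline
on: [push]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: dotnet build --configuration Release
      - name: Unit tests
        run: dotnet test --filter Category=Unit
      - name: Integration tests
        run: dotnet test --filter Category=Integration
  deploy-uat:
    needs: build-and-test   # gates promotion on all tests passing
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Promote to UAT      # e.g. triggering an Octopus Deploy release
        run: ./scripts/promote.sh uat
```

The point of the sketch is the `needs:` dependency: no path to an environment exists that bypasses the test gates, which is what makes high-volume AI-generated output safe to accept.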
If your team is currently using AI coding tools without this surrounding infrastructure, you have the first piece of a much larger system. The tools are not wrong — the context they're operating in is incomplete. The good news is that building the system is a structured process. It starts with establishing a consistent architectural foundation, adds automated quality gates and pipeline hardening, and then layers in AI-assisted generation on top of an environment designed to validate and deploy that output safely.

Want the Full Framework?

Clear Measure's AI-driven development methodology covers the complete lifecycle — from readiness assessment and architectural standardization through full pipeline automation and production telemetry. See the full technical guide →
The distinction between AI-assisted coding and AI-driven development isn't academic. It's the difference between a tool that speeds up one engineer and a system that transforms how an entire team delivers software. Understanding where you are on that spectrum is the first step toward building something better.
Talk to a Clear Measure AI DevOps Architect. We'll assess your codebase, DevOps foundation, and delivery baseline and tell you honestly what your AI readiness looks like.

Talk to an Architect
 
AI-driven software isn’t about replacing engineers—it’s about amplifying their effectiveness. By letting computers generate significant portions of code, projects can move at the pace of your ideas while using fewer manual hours. Work that would take an average developer two weeks of coding can be reduced to a single day with AI-driven development. This shift allows teams to deliver features quickly and with fewer people—creating better economics than traditional offshore development while avoiding communication gaps or delays.

To make that speed count, teams need to create clarity—clear goals, shared priorities, and a visible path. When everyone knows what’s being built, why it matters, and how success is measured, AI becomes a real accelerator instead of just a code generator. This combination of AI-driven development and clarity allows teams to deliver features faster, reduce rework, and maintain high quality.

AI at Work in Software Builds

When specifications, UX details, technical design, and test plans are clear, AI tools can produce large portions of functionality in seconds. Engineers then refine and validate that output, ensuring the resulting software matches your standards and integrates cleanly into your systems. Creating clarity early in these steps reduces confusion and rework later. By structuring work into well-defined stages—concept, design, implementation, validation, release—teams ensure AI is applied where it fits best and everyone can see where progress stands.

Unlocking New Economics

Organizations often think that outsourcing offshore is the only way to save on development costs. AI-driven development offers a different path—enabling teams to produce code faster, streamline processes, and reduce expenses, while maintaining high quality and reliable outcomes. Smaller, well-scoped projects make that advantage even stronger. When the team understands the purpose and outcomes, AI-accelerated work stays focused, predictable, and aligned with business goals.

Features, Delivered Sooner

This planned approach accelerates delivery while freeing developers to focus on high-value work: defining the right features, improving user experience, and strengthening overall design. Measuring what matters—quality, stability, speed, and outcomes—keeps everyone on the same page and builds trust. With clear metrics, AI-driven development doesn’t just move faster; it moves smarter.

Key Takeaways
  • AI-assisted coding accelerates development cycles. 
  • Tasks that once took two weeks can now be completed in a single day. 
  • Projects can be delivered with fewer people, improving cost efficiency. 
  • Early detection and refinement minimize rework. 
  • Faster builds unlock innovative, intelligent features for your software. 

AI-Driven Development is changing how software teams think about designing, building, and delivering applications. By embedding AI into the development process, teams can move faster, reduce repetitive work, and make better architectural decisions, creating software that is maintainable and reliable. AI is not about replacing intelligence; it is automation that helps us move faster, not think for us.

The Role of AI in Modern Software Development
AI takes on repetitive or time-consuming tasks such as generating code patterns, refactoring, or analyzing code for potential issues. It does not make people smarter; it helps experienced teams move faster and stay focused on higher-level architecture and design decisions. When developers know what they are doing, AI accelerates their work. When they do not, it exposes gaps. The value comes from speed and consistency, not from intelligence.

Architecture Meets AI
Strong software architecture remains essential for scalable and maintainable applications. AI-driven development works alongside architectural best practices by:

Improving scalability and performance: By analyzing code and dependencies, AI helps identify slow areas and improve system design.
Enhancing collaboration: Developers, testers, and architects can use AI outputs to stay aligned with architecture and implementation strategies.

AI processes what is given to it and performs pattern matching, not reasoning. Integrating it into architecture-focused workflows results in faster builds and cleaner designs, but the intelligence still comes from the people using it.

Practical Benefits of AI-Driven Development
Teams see clear advantages from AI-driven development:

Faster coding and iteration: AI automates repetitive coding work so teams can focus on architecture.
Proactive problem detection: Pattern analysis identifies issues early, reducing rework.
Better code quality: Consistent, AI-assisted reviews help keep systems clean and maintainable.

Unlocking the Future of Software Delivery
AI-driven development is not a trend; it is a practical evolution in how software is built. There is no real intelligence here; it is automation that speeds up what skilled teams already know how to do. Combining AI with solid architectural practices enables organizations to deliver faster, more reliable software without the hype, focusing on accurate and efficient execution grounded in expertise.

One of the most promising applications of generative AI in today’s enterprise landscape is automating business processes. These workflows often involve nuanced decisions and inconsistent or unstructured data, making them difficult to automate with traditional logic-based systems. In this post, we’ll explore how to build a .NET agent using Semantic Kernel and Azure AI Foundry to process employee expense reports based on natural-language policies. This is a practical, hands-on example of how large language models (LLMs) can be embedded into real business applications. To access the code, you can find it on GitHub here.

Most organizations require employees to submit expense reports for reimbursement. These reports typically include:
  • A summary of expenses (meals, travel, lodging, etc.)
  • Receipts or supporting documents
  • Explanations or justifications
A human reviewer must then interpret the company’s expense policy to determine whether each report should be approved, denied, or escalated. Policies often contain natural-language rules like:
  • Meals must not exceed $75 per day
  • Receipts are required for expenses over $25
  • Travel must be pre-approved
  • Alcohol is not reimbursable
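For illustration, a policy like this can simply live as prose in a plain text file that the agent reads verbatim — no rules engine required. A hypothetical sketch (the actual policy file ships with the sample repo):

```
Travel Expense Policy

- Meals must not exceed $75 per day.
- Receipts are required for any expense over $25.
- Travel must be pre-approved by a manager.
- Alcohol is not reimbursable.
```

Because the LLM interprets this text directly, policy changes become text edits rather than code changes.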
While these rules are easy for humans to understand, they’re tricky to encode in software. Legacy systems rely on structured input fields - dates, dropdowns, number boxes, etc. - to extract and validate data. These approaches struggle when inputs vary or include ambiguity. For example:
  • One user attaches a PDF receipt, another pastes in a screenshot
  • Justifications vary from bullet points to full paragraphs
  • Fields are left blank or inconsistently formatted
Maintaining and scaling these brittle rulesets is time-consuming and error-prone. What’s needed is a system that understands context and intent, not just structure. By combining Semantic Kernel with Azure-hosted language models, we can build a .NET agent that reads expense data, applies policy logic, and returns a clear recommendation - all using natural language. Benefits of this approach:
  • Works with inconsistent or unstructured input
  • Understands prose-style policies and nuanced reasoning
  • Returns human-readable summaries for transparency
We’ll start by defining a simple data structure to hold the agent’s recommendation.

public class ExpenseReportRecommendation
{
    public string EmployeeName { get; set; } = string.Empty;

    public DateTime ReportDate { get; set; }

    public decimal AmountReported { get; set; }

    public decimal ReceiptsTotal { get; set; }

    public string Recommendation { get; set; } = string.Empty;

    public string Summary { get; set; } = string.Empty;
}
The agent will use this class to report its findings - whether to approve, deny, or refer a report to a manager. Our agent will need access to two main tools:
  1. A function to retrieve the employee’s expense report
  2. A function to retrieve the current expense policy
Here’s the function to retrieve the report:

[KernelFunction(nameof(GetExpenseReport))]
[Description("Gets the expense report for the specified employee.")]
public JsonDocument? GetExpenseReport(string employeeName)
{
    var path = Path.Combine(AppContext.BaseDirectory, "Data", $"{employeeName}.json");
    if (!File.Exists(path))
    {
        return null;
    }
    var jsonContent = File.ReadAllText(path);
    var report = JsonDocument.Parse(jsonContent);
    return report;
}
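This function reads a per-employee JSON file from a Data folder. A hypothetical Data/Alex.json — the exact shape is flexible, since the model interprets the document rather than a fixed parser — might look like:

```json
{
  "employeeName": "Alex",
  "reportDate": "2024-05-10",
  "amountReported": 182.50,
  "expenses": [
    { "category": "Meals", "amount": 62.50, "receiptAttached": true },
    { "category": "Lodging", "amount": 120.00, "receiptAttached": true }
  ],
  "justification": "Client visit, travel pre-approved by manager."
}
```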

And here's the function to load the policy:

[KernelFunction(nameof(GetExpensePolicyAsync))]
[Description("Gets the travel expense policy for the organization.")]
public async Task<string> GetExpensePolicyAsync()
{
    var fullPath = Path.Combine(AppContext.BaseDirectory, PolicyPath);
    var policy = await File.ReadAllTextAsync(fullPath);
    return policy.Trim();
}
In a production environment, these could connect to a database, SharePoint site, or cloud storage. Now we create a ChatCompletionAgent, wiring in the tools and specifying how it should interpret the data. This includes system instructions and the output format.

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey)
    .Build();

kernel.Plugins.AddFromType<ExpenseReportTools>(nameof(ExpenseReportTools));

_chatCompletionAgent = new ChatCompletionAgent
{
    Name = "Expense-Agent",
    Description = "Agent for expense report processing.",
    Instructions = $"""
You are an expense report processing agent.
Apply the organization's expense policy to recommend if expense reports should be approved, denied, or referred to a manager.
'Approve' means the total amount matches the receipts and is within policy limits and rules.
'Deny' means the total amount does not match the receipts, exceeds policy limits, or violates rules.
'Refer' means the expense report requires further review by a manager.
Return json with the schema:
{JsonSerializerOptions.Default.GetJsonSchemaAsNode(typeof(ExpenseReportRecommendation))}
""",
    Kernel = kernel,
    Arguments = new KernelArguments(
        new OpenAIPromptExecutionSettings
        {
            FunctionChoiceBehavior = FunctionChoiceBehavior.Required()
        })
};
We also define a method to invoke the agent and parse the result:

public async Task<ExpenseReportRecommendation?> ProcessExpenseReportAsync(string employeeName)
{
    var chatMessage = new ChatMessageContent(AuthorRole.User, $"Process the expense report: {employeeName}.");

    await foreach (ChatMessageContent chatMessageContent in _chatCompletionAgent.InvokeAsync(chatMessage))
    {
        var response = chatMessageContent.Content ?? string.Empty;

        if (!response.StartsWith("{"))
        {
            continue;
        }

        var expensesReportDecision = JsonSerializer.Deserialize<ExpenseReportRecommendation>(response);
        return expensesReportDecision;
    }

    throw new InvalidOperationException("Failed to process expense report");
}
This method sends a request to the agent and parses the returned JSON into our predefined class. Let's try it out by evaluating reports for two employees:

var expenseAgent = new Agent(deploymentName, endpoint, apiKey);

var employees = new[] { "Alex", "Sam" };

foreach (var employee in employees)
{
    var recommendation = await expenseAgent.ProcessExpenseReportAsync(employee);

    ArgumentNullException.ThrowIfNull(recommendation, nameof(ExpenseReportRecommendation));

    Console.WriteLine($"Recommendation for {employee}:");
    Console.WriteLine($"Employee Name: {recommendation.EmployeeName}");
    Console.WriteLine($"Report Date: {recommendation.ReportDate.ToShortDateString()}");
    Console.WriteLine($"Amount Reported: {recommendation.AmountReported:C}");
    Console.WriteLine($"Receipts Total: {recommendation.ReceiptsTotal:C}");
    Console.WriteLine($"Recommendation: {recommendation.Recommendation}");
    Console.WriteLine($"Summary: {recommendation.Summary}");
    Console.WriteLine("-------------------------------------------------------");
}

Console.WriteLine("Press any key to continue...");
Console.ReadKey();
In this example:
  • The agent recommends approving Alex's report.
  • Sam's report is denied due to a discrepancy between the total reported and the attached receipts.
Try modifying the policy to allow small discrepancies or relax other rules - and see how the agent adapts its reasoning accordingly. This isn't just automation - it's decision automation. The agent:
  • Reads semi-structured or natural language inputs
  • Interprets human policies
  • Produces explainable, auditable decisions
It shows how large language models can act as reasoning engines for enterprise workflows, delivering decisions that are both scalable and accurate - all within your .NET environment.
Learn how to build a RAG pipeline in .NET using SQL Server 2025 vector search, Azure AI Vision, and Azure AI Foundry to power intelligent semantic photo search.
Boost software development productivity using AI tools like GitHub Copilot and LLMs. Learn how developers can leverage AI today for real-world impact.
AI won’t replace developers—it’ll boost them. Learn how clean architecture and leadership can help you thrive with AI in software development.
Learn key lessons from the CrowdStrike outage to prevent tech failures, improve system stability, & enhance cloud infrastructure resilience.
