Jeffrey Palermo, CTO & Chairman of Clear Measure, presented the AI Software Factory, an executive-level architectural pattern for orchestrating software delivery from idea to production. He opened by addressing a core problem: software delivery has become the constraint in most organizations, and teams can't simply work faster when defects and production incidents are constantly consuming capacity. Poorly engineered AI adoption doesn't solve this; it just ships bugs faster. The AI Software Factory is the next evolution following Agile, DevOps, and cloud adoption, orchestrating people, processes, and automation across the entire delivery lifecycle.
A key theme throughout was that visibility must come before automation. Using a live Kanban board demo and a real client project, Jeffrey showed how a weekly scorecard tracking throughput, mean time to delivery, escape defects, and production incidents reveals bottlenecks and process gaps that would otherwise stay hidden. From there, AI automation is introduced intentionally, starting with simple, low-risk tasks, and always measured against the scorecard to confirm real improvement. Clear Measure's goal is to help organizations build software delivery systems that safely exploit AI without destabilizing their business.
A production support team at a digital-first insurance company was spending significant time on repetitive manual tasks like resolving import failures, investigating logs, and refining tasks. These issues occurred multiple times per week, with import failures taking about an hour each, log investigations around two hours per incident, and task refinement requiring several developer hours weekly. This limited their ability to focus on higher-value work, and AI adoption across teams was initially limited.
Clear Measure helped address these challenges by introducing Cursor for repeated tasks, sharing skills and workflows with the team, and providing training for developers, team leads, BSAs, and QA. Time was also allocated to rebuild workflows using Cursor. As a result, the team saw major efficiency gains: import failures dropped to about 15 minutes, log investigations to around 20 minutes, and task refinement to 1–2 hours per week. Issue investigation time decreased, several production issues were resolved the same day, and AI adoption continued to grow across the organization.
The AI knowledge gap is why most engineering teams aren’t seeing results.
AI budgets are up, but they are under scrutiny to show real, measurable business results. Expectations are high, especially given the AI hype. And yet most teams are not delivering the improvements and productivity gains leadership expected when they approved those investments. The issue isn't the technology.

It's how teams are set up to use it.
The organizations that capture real value from AI are not the ones that moved fastest to adopt tools. They're the ones who had strong engineering fundamentals in place before AI entered the picture, including fast CI pipelines, automated testing, stable deployments, and real observability. When those foundations exist, AI accelerates everything. When they don't, AI adds noise. Developers spend time managing tool outputs instead of shipping features. Pilots stall. Leadership loses confidence. ROI never materializes. More spending on AI tools does not automatically create more value. What creates value is building the system around AI intentionally, starting with quality and stability, and treating automation as something you earn, not something you install.

The tools are not the bottleneck. Your engineers are being handed AI-powered coding assistants and asked to figure it out. Most organizations have not built the learning environment, workflow integration, or clear standards that would allow their teams to use these tools effectively and confidently. The result is exactly what you'd expect: some engineers find ways to make AI work for them; most don't, and the augmented capability leadership expected never shows up across the team. Closing this gap requires deliberate investment in your people: not just licenses and subscriptions, but structured training, peer learning, and a delivery framework that makes AI adoption a system-level capability rather than an individual experiment.

That's where Clear Measure can help. The monthly AI Software Architect Forum is a peer-led conversation, guided by The Five Pillars, to help engineering leaders with the real challenges of AI adoption, team performance, and software delivery. Led by Jeffrey Palermo, Clear Measure's Chief Architect and a 13-time Microsoft MVP, this is a place for candid discussion with peers who are navigating the same decisions you are. No vendor pitches. No slides.
Just a focused conversation about what is actually working. Register for the Next Forum →

For teams ready to go deep, the Advanced .NET Bootcamp is three days of hands-on, practitioner-led training covering modern .NET architecture, DevOps fundamentals, and AI-driven development. This bootcamp teaches attendees how to build a delivery system that AI can actually improve. The curriculum covers the engineering fundamentals that make AI adoption stick, including CI architecture, automated testing, stable deployments, and observability, and then layers in AI-driven development once that foundation is solid. Your engineers and lead architects leave knowing not just how to use AI tools, but where to apply them, how to measure whether they're working, and how to keep quality from slipping as automation increases.

"The AI portion of the Advanced .NET Bootcamp has been especially valuable. It's practical and grounded in real workflows, which matters in a fast-moving space where hype is everywhere." — Bootcamp Attendee

Contact Us to Learn More About the Bootcamp →

If your team is evaluating how to build AI into your software delivery process, not just into individual workflows, then our AI Software Factory demo is worth an hour of your time. Jeffrey walks through a live system: real work items, real automation, and real delivery metrics updating in real time. These sessions are kept small and meant for actual conversation about your stack and your starting point. Schedule a Demo Session →

The organizations winning with AI right now are not the ones who spent the most. They're the ones who treated AI as an end-to-end system problem, not a tooling problem: they built the delivery foundations first, trained and upskilled their people deliberately, and measured every automation decision against real outcomes. That's the work Clear Measure does. We don't sell AI tools.
We help engineering teams build the system around them: one that can absorb AI intentionally, scale it responsibly, and make the gains show up in your delivery metrics.
(install via winget install GitHub.Copilot.CLI, or download from github.com/copilot)

# Verify you're on Windows 11 or Server 2022
[System.Environment]::OSVersion.VersionString
# You should see something like:
# Microsoft Windows NT 10.0.26100.0
# Requires Administrator privileges
# Run as Administrator!
# Create a 200 GB DevDrive on D: with label "DevDrive-Repos"
New-DevDrive -Drive D: -Size 200GB -Name "DevDrive-Repos"
# Verify creation
Get-Volume | Where-Object { $_.FileSystem -eq "DevFS" }
# You should see your new DevDrive listed
⏱️ Time: 2-5 minutes for DevDrive initialization
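If the volume doesn't show up in the Get-Volume output, fsutil can report Dev Drive status directly on Windows 11 builds that support the feature. A quick check, assuming drive letter D: as in the example above:

```
REM Query whether D: is a Dev Drive and whether it is trusted
fsutil devdrv query d:
```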
copilot
Replace YourUsername with your Windows username, and D:\ with your DevDrive path (if using a different letter).

🔍 Analyzing your setup...
📋 Creating implementation plan...
PLAN CREATED:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Problem: Migrate repositories (~85 GB) from C:\Users\YourUsername\source\repos
to DevDrive (D:\) while maintaining backward compatibility
Approach:
1. Create a symbolic link at C:\Users\YourUsername\source\repos → D:\
2. Move all repositories from C:\Users\YourUsername\source\repos to D:\
3. Verify all paths resolve correctly
4. Test Git operations through the symbolic link
Strategy:
✓ Single directory link (more maintainable than individual links)
✓ Target layout: All repos directly under D:\ (D:\repo-name structure)
✓ Backward compatibility: Old paths continue working transparently
Ready to proceed? (Type 'start' to begin migration)
Type start to begin:
start
Copilot CLI will execute the migration in phases:
=== PHASE 1: PRE-MIGRATION VALIDATION ===
✓ Source directory exists
✓ DevDrive (D:\) exists
✓ Source size: 85.27 GB
✓ D:\ free space: 252.65 GB
✓ Sufficient space available
✓ Path is a regular folder (ready for conversion)
✓ No open handles detected
=== PHASE 1 COMPLETE ===
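Before letting the agent proceed, the same pre-flight numbers can be sanity-checked by hand. A minimal PowerShell sketch, assuming the source path from the plan above:

```powershell
# Size of everything under the old repos folder
$src = "C:\Users\YourUsername\source\repos"
$srcGB = [math]::Round((Get-ChildItem $src -Recurse -File -ErrorAction SilentlyContinue |
    Measure-Object -Property Length -Sum).Sum / 1GB, 2)

# Free space on the DevDrive
$freeGB = [math]::Round((Get-Volume -DriveLetter D).SizeRemaining / 1GB, 2)

Write-Host "Source: $srcGB GB  |  Free on D:\: $freeGB GB"
if ($freeGB -le $srcGB) { Write-Warning "Not enough free space on D:\ for the migration" }
```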
=== PHASE 2: MIGRATING REPOSITORIES ===
Found N repositories to migrate
Using Robocopy for efficient migration with resume capability...
Moving: project-api
✓ Success
Moving: framework-core
✓ Success
Moving: webapp-frontend
✓ Success
...
Moving: infrastructure-tools
✓ Success
=== PHASE 2 SUMMARY ===
Successfully migrated: N
Failed migrations: 0
=== PHASE 2 COMPLETE ===
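If you'd rather run the move yourself, the per-repository step can be approximated with Robocopy's move mode. Flags here are illustrative, and project-api is one of the example repos named above:

```
:: Move one repository: include subfolders even if empty (/E), delete the
:: source after a successful copy (/MOVE), retry twice with a 5s wait (/R:2 /W:5)
robocopy "C:\Users\YourUsername\source\repos\project-api" "D:\project-api" /E /MOVE /R:2 /W:5
```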
=== PHASE 3: CREATING SYMBOLIC LINK ===
Creating symbolic link...
Source: C:\Users\YourUsername\source\repos
Target: D:\
✓ Symbolic link created successfully
✓ Symbolic link verified!
=== PHASE 3 COMPLETE ===
What happens: Copilot creates a directory junction with mklink /J, so the old path transparently redirects to the DevDrive.

=== PHASE 4: VERIFICATION & TESTING ===
1. Testing path redirection...
Old path: C:\Users\YourUsername\source\repos
Points to: D:\
✓ Verified
2. Listing repositories via old path...
Found N repositories
✓ Path is accessible
3. Listing repositories via new path (D:\)...
Found N repositories
4. Verifying consistency...
✓ Repository counts match
5. Testing file access through old path...
Sample repo: sample-project
✓ Can access files
=== PHASE 4 COMPLETE ===
=== PHASE 5: TESTING TOOL COMPATIBILITY ===
Testing Git Integration...
Found Git repo: sample-project
Testing 'git status' via old path...
✓ Git commands work through symbolic link
Testing Absolute Path Access...
C:\Users\YourUsername\source\repos\sample-project
✓ Accessible
D:\sample-project
✓ Accessible
=== PHASE 5 COMPLETE ===
cd C:\Users\YourUsername\source\repos\sample-project
git status
code C:\Users\YourUsername\source\repos\sample-project
cd C:\Users\YourUsername\source\repos\my-dotnet-app
dotnet build
✅ All should work without any configuration changes.
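You can also confirm the redirection yourself from a prompt. A sketch assuming PowerShell 7, where Get-Item exposes the link target; Windows PowerShell 5.1 exposes a Target property instead:

```powershell
# The old path should report itself as a junction pointing at the DevDrive
$link = Get-Item "C:\Users\YourUsername\source\repos"
$link.LinkType     # Expect: Junction
$link.LinkTarget   # Expect: D:\  (use $link.Target on Windows PowerShell 5.1)
```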
Application Request
↓
C:\Users\YourUsername\source\repos
↓
[Windows Kernel: This is a junction to D:\]
↓
D:\
↓
Actual Files & Directories
| Operation | Before (C: SSD) | After (DevDrive) | Improvement |
|---|---|---|---|
| Git clone (2 GB repo) | 45 seconds | 28 seconds | ~38% faster |
| Git status (1000 files) | 2.3 seconds | 1.8 seconds | ~22% faster |
| dotnet build | 35 seconds | 26 seconds | ~26% faster |
| npm install | 18 seconds | 13 seconds | ~28% faster |
| File enumeration (500K) | 4.2 seconds | 2.8 seconds | ~33% faster |
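Numbers like these vary with hardware, filesystem state, and repository shape, so it's worth measuring on your own machine. A minimal comparison using Measure-Command, with sample-project standing in for one of your repos:

```powershell
# Time git status through the old (junction) path and the direct DevDrive path
Measure-Command { git -C "C:\Users\YourUsername\source\repos\sample-project" status } |
    Select-Object -ExpandProperty TotalSeconds
Measure-Command { git -C "D:\sample-project" status } |
    Select-Object -ExpandProperty TotalSeconds
```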
$drive = Get-Volume -DriveLetter D
$freeGB = [math]::Round($drive.SizeRemaining / 1GB, 2)
Write-Host "Free space on D:\: $freeGB GB"
Solution: Close all applications accessing the repositories (IDEs, VSCode, file explorers, Git tools, antivirus) and retry.
# 1. Remove the symbolic link
rmdir "C:\Users\YourUsername\source\repos"
# 2. Recreate the folder and move repositories back (all data remains intact;
#    adjust the wildcard if anything besides your repos lives on D:\)
mkdir "C:\Users\YourUsername\source\repos"
Move-Item "D:\*" "C:\Users\YourUsername\source\repos\"
Hear directly from past attendees of Clear Measure's Advanced .NET Bootcamp — a 3-day, immersive in-person training taught by Jeffrey Palermo, designed for software engineers and architects who want to sharpen their skills and deliver better software, faster.
The bootcamp covers modern .NET architecture, DevOps practices, cloud transformation, application modernization, AI-driven development, and more — with hands-on exercises throughout each day. Ready to level up your team?
Learn more and enroll: https://clearmeasure.com/trainings/workshops/advanced-net-bootcamp/
Questions? Email us at info@clear-measure.com
Industry Veteran with 35-Year Career in Technology Services Joins to Lead New Business Sales and Market Expansion
Monday, April 13, 2026 — Clear Measure announced the appointment of Richard Sobota as Vice President of Strategic Growth. In this role, Sobota will lead the company's new business sales efforts, build new client relationships, and help drive Clear Measure's next phase of revenue growth. He will report to Jeffrey Palermo and focus on driving new client growth and expanding the company's market reach.
Sobota brings more than 35 years of experience in technology services, digital transformation, and enterprise solution delivery. He has held senior leadership roles at Accenture, IBM, Capgemini, and Cognizant, where he built a strong track record of leading complex pursuits, developing strategic client relationships, and helping organizations modernize and grow through technology.
His experience spans cloud transformation, application modernization, data and artificial intelligence, enterprise platform deployments, and large-scale managed services engagements while delivering real results for many of the world's leading organizations across retail, consumer products, travel, and the public sector.
Before his industry career, Sobota graduated from the United States Air Force Academy and served as a systems engineer and intelligence officer. He is known for combining executive-level relationship building with a practical, customer-first understanding of how technology creates business value.
At Clear Measure, Sobota will lead sales and help more clients understand the value the company brings through custom software, modern engineering practices, and business-focused technology solutions.
Contact: Richard Sobota, richard.sobota@clear-measure.com

One of our clients was spending five full days on every manual deployment — not because they lacked talent, but because their Octopus Deploy environment had never been properly assessed or updated since it was first stood up years earlier. Tentacles (Octopus's deployment agents) had accumulated. Integrations had drifted. And nobody had stopped long enough to ask whether any of it still made sense. If that sounds familiar, you're not alone. Aging, unexamined Octopus Deploy environments are one of the most consistent problems we see across enterprise teams in healthcare, financial services, insurance, energy, and beyond.
Here's how we help teams fix that — and what's possible on the other side.
Phase I: Assessing Your Octopus Deploy Environment Before Migration

When a client comes to us with an aging Octopus Deploy environment, we don't start by recommending an immediate upgrade. We start by looking carefully at what is already there. Our Octopus Deploy Migration Planning engagement — designed specifically for complex environments — gives teams a complete picture before a single migration step is taken.

Following Phases I and II, you'll have something most teams never start with: a validated migration pattern, a target environment that's been proven to work, and a team that has already done this once successfully. That changes everything about how the remaining portfolio gets migrated.
At that point, we'll present a full estimate for converting the remainder of your applications to Octopus Deploy. From there, you have options. Our team can execute the full migration on your behalf, or your team can take on the work directly with our experts advising alongside them. What doesn't change regardless of the path: your team will understand exactly how the work is done, with detailed documentation and hands-on advisement until they're fully confident managing the environment on their own.
Octopus Deploy Training: Up-Leveling Your Team for Long-Term Success

Deploying a new version of Octopus is only as valuable as your team's ability to use it well. That's why team enablement is built into every Clear Measure engagement — not treated as an afterthought.
We meet teams where they are. If your team is new to Octopus or needs to reset some ingrained habits, a focused 2-hour orientation gets everyone aligned quickly. For organizations ready to go deeper, a full day of platform engineering planning helps connect architecture decisions to long-term delivery goals. And for enterprise teams looking to build advanced expertise and deploy at scale, we offer pro-level training customized to your environment and roadmap.
As Clear Measure Chief Architect Jeffrey Palermo puts it: "There isn't anything Octopus can't deploy. But if automated DevOps is new to your team, make sure to plan your platform engineering properly. Empower your team to establish quality, achieve stability, and increase speed of delivery." Learn more about how we work with Octopus Deploy across industries and team sizes.

What Our Clients Are Saying
"It was taking our team five days to do a proper manual deployment, so I decided it was time to move automation to the next level. By increasing the utilization of Octopus Deploys automation features from 10% to 80%, the company has increased productivity by over 84%. Now we have more efficiency and accuracy. It's a completely different deployment experience." — SVP of Operations, Alphapoint "Our old method of deployments was cumbersome on our IT team, and required significant time and stress. Clear Measure helped us set up an Octopus Deploy configuration that allows us to initiate mid-day deployments, saving time we would have normally spent after-hours to do a deployment." — FrontierHow Octopus Deploy Fits Into a Modern AI DevOps Architecture Upgrading Octopus Deploy is one piece of a larger picture. At Clear Measure, we view deployment tooling as a core component of a modern AI DevOps architecture: the interconnected system of pipelines, automation, feedback loops, and intelligence that allows engineering teams to deliver software reliably and rapidly.
When your Octopus Deploy instance is current, properly configured, and correctly integrated with your build servers and cloud environments, it becomes the foundation that makes AI-assisted delivery possible — automated validation, complete auditability, and the kind of deployment speed that lets engineers focus on architecture and innovation instead of firefighting. Without that foundation, even the best AI tooling has nothing reliable to build on.
To see what this looks like end-to-end, download our AI DevOps Architecture Poster — a print-ready reference designed for .NET and Azure engineering teams.

Octopus Deploy Migration Results: Real Client Outcomes

The numbers speak for themselves. In one engagement with a FinTech firm, new environments that previously took days to provision were up and running in 4 hours or less, features could be deployed on demand, and overall team productivity increased by 84%. Read the full case study: Optimized DevOps Roadmap to Deliver Faster Results

In another engagement, a supply chain management company with over 200 employees eliminated tedious manual deployments entirely — gaining the flexibility to deploy mid-day without disruption, proactively catching errors before they reached production, and freeing their IT team from the after-hours grind that had become the norm. Read the full case study: Streamline Deployments and Reduce Cycle Time

Both transformations started with the same foundational work — assessing what existed, planning the right path forward, and proving it out before scaling. The pattern holds across every industry we work in.

If your team is running an older self-hosted Octopus Deploy instance and isn't sure where to start, the best first step is a clear-eyed look at what you actually have — an honest technical assessment that tells you where your environment stands, what risk is accumulating, and what a better future state looks like.
That's exactly what our Octopus Deploy Migration Planning engagement is designed to deliver. Explore our Octopus Deploy practice to learn more, or contact us to start the conversation.