TLDR: What started as “Simple Dev” has evolved into something bigger. With vibe coding and tools like the BC Code Intelligence MCP Server, consultants can tackle more complex development work – but only with rigorous governance and a fundamental shift in how functional and technical teams collaborate.
The question for me (as a consultant) isn’t whether consultants should ‘stay in their lane’ – it’s whether we can both win when good code is the outcome.
From Simple Dev to Vibe Coding
I’ve been thinking about this for a while, before “vibe coding” became a buzzword. A while back, I wrote about “Simple Dev” – the idea that consultants can and should understand the basics of AL coding. Not because we want to do complex development, but because there’s productive middle ground between Low-Code and Pro Dev.
The core belief: Consultants who understand the problem space deeply often know what needs to be built better than anyone. The question was always whether they could safely contribute to how it gets built.
Simple Dev: The Foundation
Simple Dev defined trivial AL work that developers were happy to see consultants handle:
- Page extensions for UI customization
- Simple reports with straightforward logic
- Field visibility and basic UX changes
- Low-risk changes that don’t touch database schema
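To make that scope concrete, here's roughly what a typical Simple Dev change looks like in AL – a minimal sketch, with an illustrative object ID and field names that will vary per project:

```al
pageextension 50100 "Customer Card Ext." extends "Customer Card"
{
    layout
    {
        // Promote a field this client checks on every customer
        modify("VAT Registration No.")
        {
            Importance = Promoted;
        }
        // Hide a field they never use
        modify("Language Code")
        {
            Visible = false;
        }
    }
}
```

No table changes, no business logic – exactly the kind of change that is easy to review and hard to do lasting damage with.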
The key insight? This work was always possible – but it needed “rules of the road”:
- Mandatory source control (DevOps, branching, pull requests)
- Object naming conventions and ID range discipline
- Code analyzers (CodeCop, PerTenantExtensionCop, UICop, LinterCop)
- Developer oversight and code review
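Much of this can be enforced mechanically rather than by policy. Here's a minimal sketch of the relevant VS Code settings.json entries – the LinterCop path is an assumption and depends on how you've installed that community analyzer:

```json
{
    // Built-in analyzers shipped with the AL Language extension
    "al.enableCodeAnalysis": true,
    "al.codeAnalyzers": [
        "${CodeCop}",
        "${UICop}",
        "${PerTenantExtensionCop}",
        // Community analyzer; assumes the DLL sits in the extension's analyzer folder
        "${analyzerFolder}BusinessCentral.LinterCop.dll"
    ]
}
```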
The lesson learned: what makes consultant coding trustworthy is governance – NOT restricting consultants from coding altogether.
The Game-Changer: Vibe Coding + MCP
Then everything changed.
Vibe coding – AI-assisted development with tools like GitHub Copilot – didn’t just make Simple Dev faster. It expanded what’s possible for consultants to tackle:
- Complex page scripting with intricate logic
- API integrations with external systems
- Performance optimization for slow queries
- Data transformation and migration scripts
- Advanced error handling and validation
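To illustrate the second item: an external API call is now a realistic consultant task, but only when the error paths are handled deliberately. A hedged sketch – the endpoint, JSON shape, and object names are invented for illustration:

```al
codeunit 50101 "Exchange Rate Fetcher"
{
    procedure GetRate(CurrencyCode: Code[10]): Decimal
    var
        Client: HttpClient;
        Response: HttpResponseMessage;
        Body: Text;
        Json: JsonObject;
        RateToken: JsonToken;
    begin
        // Illustrative endpoint – not a real service
        if not Client.Get('https://api.example.com/rates/' + CurrencyCode, Response) then
            Error('The rate service could not be reached.');
        if not Response.IsSuccessStatusCode then
            Error('The rate service returned HTTP %1.', Response.HttpStatusCode);
        Response.Content.ReadAs(Body);
        if not Json.ReadFrom(Body) then
            Error('The rate service returned invalid JSON.');
        if not Json.Get('rate', RateToken) then
            Error('Unexpected response: no "rate" property.');
        exit(RateToken.AsValue().AsDecimal());
    end;
}
```

Note the explicit failure handling – the part AI routinely skips and a reviewer should insist on.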
But here’s the problem: More power means more responsibility. The “rules of the road” that worked for Simple Dev aren’t enough anymore.
When AI can generate complex code in seconds, consultants face new challenges:
- Architecture matters from day one – AI might generate working code that creates technical debt
- Deeper knowledge required – Understanding why code works, not just that it works
- Quality gates must be rigorous – Performance, security, maintainability all need validation
- The stakes are higher – Complex code has more ways to fail subtly
The question: Can vibe coding + MCP knowledge bridge the gap? Can consultants move beyond Simple Dev while maintaining trust?
I believe the answer is a cautious yes – but only with the right governance, infrastructure, and methodology.
What is “Vibe Coding” for Business Central?
Vibe coding in the BC context = AI-assisted development where you use tools like VS Code and GitHub Copilot Pro to generate AL code based on intent, patterns, and domain context rather than writing every line manually.
The distinction that matters:
- ❌ NOT: Copy-paste from AI without understanding (dangerous and irresponsible)
- ✅ IS: Leveraging AI as a development accelerator with proper validation and knowledge grounding
The Concerns Are Valid
Teams hiring consultants who use vibe coding have legitimate concerns:
- “Are they just letting AI do their job?” → Are they adding value or just prompting faster?
- “How do I know the code follows BC best practices?” → Does the AI understand SIFT, CalcSums, SetLoadFields?
- “Will this create maintenance nightmares?” → Can they explain and debug what AI generated?
- “Is the code optimized or just functional?” → Will it work with 100 records but crash with 10,000?
These aren’t “fear of AI” concerns – they’re software quality concerns that matter even more when AI generates code quickly.
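That last concern deserves a concrete example. The naive pattern (left in as a comment) works fine against a demo database and degrades linearly with data volume; CalcSums pushes the aggregation down to SQL, backed by a SIFT index on this table. A minimal sketch:

```al
codeunit 50102 "Customer Balance"
{
    procedure GetBalanceLCY(CustomerNo: Code[20]): Decimal
    var
        DtldCustLedgEntry: Record "Detailed Cust. Ledg. Entry";
    begin
        DtldCustLedgEntry.SetRange("Customer No.", CustomerNo);

        // Naive pattern generic AI often generates: fetch every row into
        // the service tier and sum in AL – fine at 100 rows, painful at 10,000
        // if DtldCustLedgEntry.FindSet() then
        //     repeat
        //         Result += DtldCustLedgEntry."Amount (LCY)";
        //     until DtldCustLedgEntry.Next() = 0;

        // BC-grounded pattern: one aggregated call answered by SQL/SIFT
        DtldCustLedgEntry.CalcSums("Amount (LCY)");
        exit(DtldCustLedgEntry."Amount (LCY)");
    end;
}
```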
The Bridge: BC Code Intelligence MCP Server
The BC Code Intelligence MCP Server demonstrates how vibe coding can potentially be trustworthy for consultants, not just fast.
Generic AI (ChatGPT, base GitHub Copilot) doesn’t know BC-specific performance patterns, AL coding guidelines, permission patterns, or upgrade impact considerations. Result: AI might generate code that compiles but violates BC best practices.
The MCP Solution: Systematic Knowledge Grounding
1. 14 Specialist Personas (Domain expertise > generic AI)
- 🏗️ Alex Architect, 💻 Sam Coder, 🔍 Dean Debug, 🤖 Casey Copilot
- 📝 Roger Reviewer, 🔒 Seth Security, 🧪 Quinn Tester, 🌉 Jordan Bridge
- Plus 6 more for UX, documentation, legacy code, errors, requirements, and mentoring
2. Layered Knowledge System (Transparency through override hierarchy)
- Embedded Layer → Company Layer → Team Layer → Project Layer
- You can see exactly what knowledge the AI is using. No black box.
3. 20+ MCP Tools for knowledge discovery, specialist engagement, and workflow orchestration
4. 9 Structured Workflow Pipelines with validation gates built-in
5. Context Preservation across specialist handoffs
What This Proves
The BC Code Intelligence MCP Server is a proof of concept that vibe coding can adopt systematic governance:
- ✅ AI assistance CAN be structured (not just ad-hoc prompting)
- ✅ Knowledge grounding makes a measurable difference (14,925% improvement with MCP-enhanced knowledge)
- ✅ Specialist-driven beats generic AI (domain expertise matters)
- ✅ Infrastructure enables verification (layered knowledge = transparency)
For consultants: This is the infrastructure that makes vibe coding trustworthy.
For teams: This is the approach you should look for when evaluating consultant AI work.
The Adapted Rules: Governance for Vibe Coding Era
Simple Dev’s “rules of the road” were necessary but not sufficient. Here’s what adapts for vibe coding:
Rule #1: AI-Generated Code = Same Standards
No exceptions. “I used AI” is not an excuse for failing code analyzers, wrong object IDs, performance issues, or security gaps. You remain accountable for ALL code you commit.
Rule #2: Architecture Review BEFORE Implementation
Before starting complex work:
- Engage architecture specialist (or pro dev) for design review
- Document key architectural decisions and tradeoffs
- Define success criteria (performance targets, scalability)
- Plan testing approach for realistic scenarios
Why: AI can generate technically correct code that creates architectural problems. Prevention is cheaper than refactoring.
Rule #3: Knowledge Grounding is Mandatory
Minimum requirements:
- BC best practices documentation accessible to AI
- Performance optimization patterns (SIFT, CalcSums, SetLoadFields)
- Security and permission guidelines
- Project-specific conventions
Ideal infrastructure:
- MCP server with BC knowledge base
- Specialist personas for domain-specific guidance
- Layered knowledge system showing what AI is using
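As an example of what that grounding should surface: patterns like SetLoadFields, which generic models rarely reach for because they are BC-specific. A minimal sketch:

```al
codeunit 50103 "Customer Email Export"
{
    procedure CollectEmails(var Emails: List of [Text])
    var
        Customer: Record Customer;
    begin
        // Without SetLoadFields, every column of the Customer table is
        // fetched per row; with it, SQL selects only what we read below
        Customer.SetLoadFields("No.", "E-Mail");
        Customer.SetFilter("E-Mail", '<>%1', '');
        if Customer.FindSet() then
            repeat
                Emails.Add(Customer."E-Mail");
            until Customer.Next() = 0;
    end;
}
```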
Rule #4: Systematic Testing and Validation
Extended beyond “it compiles”:
- Performance validation with realistic data volumes
- Security validation (permissions, access control)
- Quality validation (code review, maintainability)
- False positive detection (test when AI suggestions are wrong)
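In AL, "extended beyond it compiles" means real test codeunits sitting behind those gates. A minimal sketch, reusing the hypothetical Customer Balance codeunit from earlier – "Library Assert" ships with Microsoft's test framework:

```al
codeunit 50104 "Customer Balance Tests"
{
    Subtype = Test;

    var
        Assert: Codeunit "Library Assert";

    [Test]
    procedure BalanceIsZeroForUnknownCustomer()
    var
        CustomerBalance: Codeunit "Customer Balance"; // hypothetical, from the earlier sketch
    begin
        // [GIVEN] a customer number with no ledger entries
        // [WHEN] the balance is calculated
        // [THEN] the result is zero rather than a runtime error
        Assert.IsTrue(CustomerBalance.GetBalanceLCY('NO-SUCH-CUST') = 0,
            'An unknown customer should yield a zero balance');
    end;
}
```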
Rule #5: Explainability and Documentation
If you can’t explain it, you don’t understand it well enough to ship it.
Consultants must be able to walk through generated code, justify decisions, debug issues, and document their reasoning for future maintainers.
Rule #6: Continuous Measurement and Improvement
Track and share:
- Performance benchmarks (before/after metrics)
- Code quality scores
- False positive rate
- Development velocity
- Knowledge gaps discovered
Contribute back:
- Improve knowledge bases with discovered gaps
- Document when AI guidance was wrong and why
- Help others learn from your experiences
A New Collaboration Model
Traditional model: “Functional people stay in their lane. Developers do the coding.”
This model has merit: clear separation, quality control, less risk. But there’s a cost: waiting for developer capacity, slower delivery, and knowledge gaps between “what to build” and “how to build it.”
What if Both Sides Could Win?
For functional consultants:
- Implement UI and workflow changes directly (with proper governance)
- Tackle increasingly complex scenarios (with AI assistance and specialist guidance)
- Deliver faster without waiting for developer capacity
For technical teams:
- Focus on complex architecture and integration challenges (highest-value work)
- Set governance frameworks and review pull requests (ensure quality)
- Benefit from reduced UI and simple customization workload
For clients:
- Faster delivery, better solution quality, improved knowledge transfer
The Requirements
1. Functional consultants must invest in:
- BC development fundamentals (AL basics, performance patterns, security)
- Knowledge infrastructure (MCP servers, specialist access)
- Systematic validation practices (testing, code review, measurement)
2. Technical teams must invest in:
- Clear governance frameworks (rules of the road, not gatekeeping)
- Effective code review process (teaching, not just rejecting)
- Trust-building through transparency (measure outcomes, iterate together)
3. Both must commit to:
- Shared quality standards (no “good enough for consultant work”)
- Continuous improvement (retrospectives, learning from mistakes)
- Mutual respect (functional expertise + technical expertise = better solutions)
Practical Guidance
For Consultants: Follow the 6 Rules
The framework for trustworthy vibe coding is detailed in the 6 Adapted Rules above. These aren’t suggestions – they’re the minimum requirements for maintaining trust while using AI-assisted development.
For Technical Teams: Evaluating Consultant AI Work
Questions to ask consultants:
About infrastructure:
- “How do you ensure AI-generated code follows BC best practices?”
- “What knowledge sources inform your AI tools?” (Look for: BC-specific, not generic)
- “Do you use domain-specific AI assistance or just generic tools?”
About process:
- “Can you explain your testing and validation process?”
- “How do you measure code quality and performance?”
- “How do you handle handoffs between different expertise areas?”
About capability:
- “Can you walk me through how AI assisted on this specific code?”
- “What happens if AI generates suboptimal code?”
- “Can you show examples of when you rejected AI suggestions?”
Behaviours to encourage:
- ✅ Uses layered knowledge approach with transparent sources
- ✅ Follows specialist-driven workflows with clear validation checkpoints
- ✅ Shares performance benchmarks and measurable quality metrics
- ✅ Openly discusses AI limitations and manual interventions
- ✅ Actively contributes to BC community knowledge
- ✅ Embraces code review as collaborative learning
Questions for the BC Community
For teams that have empowered consultants:
- How do you evaluate AI-assisted consultant work?
- What trust-building practices work for your team?
- Where have you seen this collaboration model succeed or fail?
For fellow consultants:
- What knowledge engineering approaches are you testing?
- How are you measuring AI-generated code quality?
- Where have you found the boundaries of what consultants should tackle?
For pro AL developers:
- What concerns you most about consultants using vibe coding?
- What would make you confident in consultant-generated code?
- How can we build better collaboration models?
Resources & Next Steps
BC Code Intelligence MCP Server: GitHub Repository
- 14 specialists, 20+ tools, systematic workflows
- Free, open-source, community-driven
Jeremy Vyska’s Knowledge Engineering Testing: “What Actually Works”
- 14,925% improvement with MCP-enhanced knowledge delivery
- Proof that knowledge delivery method matters
Original Simple Dev Concept: “Exploring the Middle Ground” (April 2024)
- Foundation concepts that led to current thinking
For Consultants:
- Evaluate your current AI assistance infrastructure
- Test BC Code Intelligence MCP or build your own knowledge framework
- Measure outcomes and share learnings
- Start with Simple Dev scope, expand as you validate quality
For Technical Teams:
- Define your governance frameworks explicitly
- Review the consultant evaluation questions in this post
- Consider how functional colleagues doing more development could benefit your team
- Look for measurable validation, not just speed claims
Let’s Build Trust Together
The question “Can consultants be trusted to vibe code?” deserves careful thought.
Possibly Yes – when they:
- Invest in BC-specific knowledge infrastructure
- Follow rigorous governance and validation frameworks
- Work collaboratively with technical teams
- Measure outcomes and maintain transparency
Probably not – when they:
- Use generic AI without BC knowledge grounding
- Skip validation and testing for speed
- Can’t explain or debug generated code
- Ignore governance and best practices
The choice is ours. We can build collaborative models where functional and technical teams both win, or we can default to “stay in your lane” because we weren’t rigorous enough to earn trust.
What the BC community does very well is share knowledge, systematic approaches, best practices, and resources for continuous improvement. Vibe coding is just the next challenge we can tackle together.
I’d love to hear your experiences with AI-assisted BC development—both success stories and cautionary tales.
Category: AI & Automation, Business Central, Community & Culture
Tags: Vibe Coding, AI, Consulting, GitHub Copilot, Business Central, Trust, Knowledge Engineering, Developer Productivity, Best Practices, Simple Dev, MCP, Collaboration
