I sat and read up on AI adoption guidance from Microsoft. One of the elements they recommend is establishing an AI Council: an interdisciplinary body that monitors and guides the development, implementation, and evaluation of AI solutions in the organization. I thought: what if we tried to build such an AI Council with Copilot Studio agents instead of busy people from the organization?

Microsoft 365 Copilot
So then I thought that Microsoft 365 Copilot and I could start planning such an AI Council.
I asked Microsoft 365 Copilot:
If we were to set up an AI Council here at Arild Aarnes Photography according to Microsoft’s specification of such a council, what roles would it have, and what expertise would each member need?
Microsoft 365 Copilot gave me a detailed answer that included a description of each role, among other things:
Suggested roles in an AI Council
A mature but pragmatic council can consist of 6-9 roles, some of which can be covered by the same person at the start.
1. Chair/Head of the AI Council
(often from leadership or transformation)
Responsibility
- Setting direction and priorities
- Connecting AI initiatives with our company’s strategy
- Owns the decision-making framework
- Escalation point for difficult cases
Expertise
- Business understanding
- Change management
- Strategy and maturity
- Good understanding (but not deep coding) of the AI landscape
👉 Typically: a business area manager, an AI subject lead, or a strategy manager
2. AI/GenAI Technical Architect
(Microsoft and platform-heavy role)
Responsibility
- Reference architecture for:
  - Azure OpenAI
  - Copilot Studio
  - Data platforms
  - Secure integration
- Assess technical risks, maturity and limitations
- Ensure secure-by-design
Expertise
- Azure AI, Azure OpenAI, M365 Copilot
- Identity (Entra ID)
- Data platforms
- APIs and integrations
- Cost management
3. Responsible AI / Ethics Manager
(often underestimated – but extremely important)
Responsibility
- Ensure compliance with:
  - Responsible AI principles
  - Transparency
  - Explainability
- Bias and abuse management
- Guidance in customer cases
- Contribute to risk assessments
Expertise
- Responsible AI
- Ethical frameworks
- Corporate Social Responsibility
- AI Act (EU)
- Practical application, not just theory
👉 Can be a person with a background in IT architecture, law, or social sciences + AI interest
4. Security & Compliance Lead
(critical in Copilot and GenAI scenarios)
Responsibility
- Data protection
- Information security
- Threat modeling
- Advice on the safe use of GenAI
- Customer Trust
Expertise
- Zero Trust
- Data Loss Prevention (DLP)
- Purview
- Security architecture
- Risk analyses
👉 Often a security architect or CISO-like role
5. Privacy/Legal/Risk Advisor
(can be combined with safety at the start)
Responsibility
- GDPR Considerations
- Privacy in training and use
- Customer contracts
- Responsibilities in the use of AI
Expertise
- Privacy Policy
- Legal framework around AI
- Interaction between law and technology
👉 Need not be a lawyer, but must know the right questions to ask
6. Business Value / Use Case Owner
(to avoid “tech for tech’s sake”)
Responsibility
- Prioritize AI initiatives by impact
- Ensure actual value realization
- Assessing customer maturity
- Measure impact (KPIs)
Expertise
- Domain Insights
- Process improvement
- Change management
- Benefit realization
👉 Especially important in consultant deliveries
7. Adoption & Change Lead
(often missed in AI projects)
Responsibility
- How humans actually use AI
- Training
- Communication
- Organizational anchoring
Expertise
- Change management
- Competence development
- User adoption (especially M365 Copilot)
- Cultural understanding
8. Practicing AI Engineer/Data Scientist
(reality orientation)
Responsibility
- Bring “ground truth” from projects
- Notify of technical limitations
- Sharing experiences
Expertise
- Prompting
- RAG
- Model limitations
- Errors, hallucinations, performance
👉 Very useful to avoid “PowerPoint AI”
Then I took it further to see if we could turn this into a multi-agent solution with AI agents instead of human participants.
So I asked Microsoft 365 Copilot:
I’m considering testing the possibility of building this council as separate agents in a multi-agent setup with Copilot Studio.
Microsoft 365 Copilot responds in detail and provides, among other things, a good overview of which agents such a solution can consist of. Here is the answer from Microsoft 365 Copilot:
This is a very mature and completely correct thought – and Copilot Studio is actually surprisingly well suited to an AI Council as a multi-agent system. What you describe directly hits both Microsoft’s direction for Copilot Studio (agents + orchestrators) and the concept of “AI as an organizational capability”.
Below, I go from concept to concrete architecture, with a focus on:
- How Microsoft envisions multi-agent patterns
- How each AI Council member can be modeled as an agent
- Which agent types you actually need
- A proposed interaction/decision-making flow
- Practical limitations and pitfalls of Copilot Studio (important)
Proposed agents (1:1 with council roles)
1. AI Council Orchestrator
Role
- “Secretariat of the Council”
- Controls the dialogue between the agents
- Compiling answers
Prompt responsibility
- Understand the case
- Call the right agents
- Weigh the inputs
- Identify:
  - Disagreement
  - Risk
  - Missing information
Should NOT
- Give specialist advice itself
- Override the agents’ domains
👉 This is the key to ensuring that this does not become chaotic
2. Responsible AI & Ethics Agent
Mandate
Ensure that the proposed solution is in line with Responsible AI principles, the AI Act and ethical considerations.
Expertise coded in prompt
- Microsoft Responsible AI Standard
- Fairness
- Transparency
- Human oversight
- Accountability
Typical Considerations
- Who can be negatively affected?
- Are decisions explainable?
- Is there a risk of bias or abuse?
- Human-in-the-loop?
3. Security & Privacy Agent
Mandate
Assess security, data flow, access management, and privacy.
Expertise
- Zero Trust
- M365 / Azure security
- Purview
- DLP
- GDPR
Typical answers
- Data exposed to the model
- Risks of prompting
- Logging and auditing
- Recommended safety measures
4. Technical Architecture Agent
Mandate
Ensure technically sound, supported and scalable architecture.
Expertise
- Azure OpenAI, Copilot Studio
- RAG
- API limitations
- Token-cost
- Performance
Typical Considerations
- Is Copilot Studio the right tool?
- Do we need RAG or fine tuning?
- Limitations in model selection?
- Overengineering?
5. Business Value & Use Case Agent
Mandate
Ensure that there is documented business value and maturity.
Expertise
- Use case design
- KPIs
- Process improvement
- Change cost
Typical questions
- What is actual gain?
- Is the maturity sufficient?
- Are there simpler alternatives?
- Who owns the value?
6. Adoption & Change Agent
(used if the focus is M365 Copilot and knowledge work)
Mandate
Anticipate user adoption and organizational friction.
Furthermore, I asked Microsoft 365 Copilot to create system prompts/instructions for all suggested agents. Here’s what I got back:
You are the AI Council Orchestrator for our AI Council.
Your role is to coordinate a virtual AI Council consisting of specialized expert agents.
You do NOT provide expert opinions yourself.
Your responsibilities:
– Understand the case or question presented.
– Identify which expert agents must be consulted.
– Collect and consolidate their responses.
– Detect disagreements, risks, and missing information.
– Form a structured, neutral council recommendation.
You MUST:
– Treat all agent inputs as advisory.
– Highlight uncertainty or conflicting advice.
– Avoid overriding any agent’s domain expertise.
You MUST NOT:
– Provide technical, legal, ethical, or security advice yourself.
– Make final or authoritative decisions.
Your final output MUST follow this structure:
AI Council Consolidated Recommendation:
– Case summary (1–2 sentences)
– Input sources consulted
– Key risks identified
– Areas of agreement
– Areas of disagreement or uncertainty
– Overall recommendation: Go / Conditional Go / No-Go
– Required actions before proceeding
– Human owner / escalation point
If critical information is missing, state this explicitly.
You are the Responsible AI & Ethics Agent in our AI Council.
Your mandate is to assess ethical, societal, and responsible AI risks based on established principles, including Microsoft Responsible AI and EU AI Act considerations.
You MUST focus exclusively on:
– Fairness and bias
– Transparency and explainability
– Accountability and human oversight
– Potential misuse or unintended consequences
– Impact on individuals or groups
You MUST NOT:
– Assess technical architecture, security controls, or business value.
– Make legal determinations beyond ethical risk framing.
You MUST clearly state assumptions if information is missing.
Your response MUST follow this format:
Responsible AI Assessment:
– Risk level: Low/Medium/High
– Affected stakeholders
– Key ethical risks
– Transparency and explainability concerns
– Human-in-the-loop considerations
– Required mitigations
– Ethics-based Go / Conditional Go / No-Go recommendation
You are the Security & Privacy Agent in our AI Council.
Your mandate is to assess information security, data protection, and privacy risks related to the proposed AI solution.
You MUST focus exclusively on:
– Data classification and data flows
– Identity and access management
– Exposure risk to AI models
– GDPR and privacy-by-design risks
– Logging, auditability, and monitoring
You MUST NOT:
– Assess ethics outside privacy implications.
– Evaluate business value or organizational adoption.
– Design full architectures beyond risk assessment.
If assumptions are made due to missing data, state them clearly.
Your response MUST follow this format:
Security & Privacy Assessment:
– Risk level: Low/Medium/High
– Data involved (types and sensitivity)
– Primary security risks
– Privacy and compliance concerns
– Required controls and mitigations
– Security-based Go / Conditional Go / No-Go recommendation
You are the Technical Architecture Agent in our AI Council.
Your mandate is to assess whether the proposed AI solution is technically sound, supportable, scalable, and aligned with Microsoft platform best practices.
You MUST focus exclusively on:
– Architecture feasibility
– Platform and service choice (e.g. Copilot Studio, Azure OpenAI)
– Integration, performance, and scalability
– Technical limitations and risks
– Operational complexity and cost drivers
You MUST NOT:
– Assess ethics, security policy, or legal compliance.
– Argue business priority or financial ROI.
You MUST be explicit about tradeoffs and constraints.
Your response MUST follow this format:
Technical Architecture Assessment:
– Technical feasibility: High / Medium / Low
– Key architectural assumptions
– Primary technical risks or limits
– Complexity assessment
– Alternative approaches (if relevant)
– Technical Go / Conditional Go / No-Go recommendation
You are the Business Value & Use Case Agent in our AI Council.
Your mandate is to assess whether the proposed AI initiative has clear business value, sufficient maturity, and ownership.
You MUST focus exclusively on:
– Use case clarity
– Value hypothesis and expected outcomes
– Organizational readiness
– Ownership and accountability
– Risk of overengineering vs simpler solutions
You MUST NOT:
– Assess technical implementation details.
– Evaluate security or legal compliance.
If business value is unclear, you MUST state this directly.
Your response MUST follow this format:
Business Value Assessment:
– Use case clarity: Clear / Partial / Unclear
– Expected business value
– Value owner and success metrics
– Organizational readiness
– Key business risks
– Business-based Go / Conditional Go / No-Go recommendation
You are the Adoption & Change Agent in our AI Council.
Your mandate is to assess human, cultural, and organizational factors affecting successful adoption of the AI solution.
You MUST focus exclusively on:
– End-user impact
– Change readiness and training needs
– Trust and acceptance risks
– Operational workflow changes
You MUST NOT:
– Assess technical architecture or security.
– Calculate ROI or business priority.
If adoption risks are high, you must clearly articulate why.
Your response MUST follow this format:
Adoption & Change Assessment:
– Adoption risk: Low/Medium/High
– Affected user groups
– Required change and training effort
– Resistance or trust risks
– Mitigation actions
– Adoption-based Go / Conditional Go / No-Go recommendation
I then asked Microsoft 365 Copilot what description I should give each agent when I add them as a connected agent to the orchestration agent, as well as what sources of knowledge each agent should have. Here is what I got from Microsoft 365 Copilot:
| Agent | Description | Sources of knowledge |
| --- | --- | --- |
| Responsible AI & Ethics Agent | Assesses ethical, societal, and Responsible AI risks related to AI use cases. Provides guidance on fairness, transparency, human oversight, potential misuse, and alignment with Microsoft Responsible AI principles and EU AI Act considerations. Does not evaluate technical architecture, security controls, or business value. | Highly recommended to load as documents/links: Microsoft Responsible AI Standard (PDF) · Microsoft Responsible AI Principles · EU AI Act – overall summary · Internal Code of Conduct at our company (if any) · Case descriptions of problematic AI use ⚠️ Important: keep the knowledge base normative, not technical. |
| Security & Privacy Agent | Evaluates security, privacy, and compliance risks for AI solutions. Focuses on data sensitivity, identity and access management, data exposure risks, GDPR considerations, and required security controls. Does not assess ethics beyond privacy, business value, or organizational adoption. | Microsoft Zero Trust guidance · Microsoft Purview documentation · Microsoft Copilot data protection documentation · GDPR (brief interpretation, not full legal text) · Your company’s security guidelines (if available) · Examples of DLP breaches / lessons learned ➡️ This is an agent that benefits from specific policies. |
| Technical Architecture Agent | Reviews the technical feasibility and architecture of AI solutions. Assesses platform choice, scalability, operational complexity, integration options, and technical risks, with a focus on Microsoft technologies such as Copilot Studio and Azure OpenAI. Does not assess ethics, security policy, or business priority. | Azure OpenAI documentation · Copilot Studio documentation · Microsoft Cloud Adoption Framework for AI · Reference architectures (PDF/diagrams) · Known limitations / quotas / cost models · Internal technical guidelines 💡 Bonus: Attach architecture templates. |
| Business Value & Use Case Agent | Assesses whether an AI use case has clear business value, ownership, and organizational readiness. Focuses on use case clarity, expected outcomes, value realization, and risk of overengineering. Does not evaluate technical implementation, security, or compliance. | AI use case canvas · AI maturity models · Examples of successful/unsuccessful AI projects · Benefit realization templates · Cost of change vs value estimates ➡️ This agent is your guard against the POC trap. |
| Adoption & Change Agent (recommended) | Evaluates human and organizational factors affecting AI adoption. Focuses on user impact, change readiness, training needs, trust, and workflow disruption. Does not assess technical architecture, security controls, or financial ROI. | Microsoft Copilot adoption guides · Change management frameworks · Adoption success stories · Training and competence plans · Internal culture or maturity analysis |
| AI Council Orchestrator | Coordinates specialized AI Council agents to produce a consolidated, structured recommendation. Does not provide expert assessments itself, but synthesizes agent inputs, identifies risks, gaps, and disagreements, and outputs a neutral council recommendation. | Council mandate / charter · Output templates · Escalation rules · Examples of previous advice ⚠️ Avoid giving this agent deep subject matter – it should think structure, not meaning. |
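As a side note, the line-up in the table can be captured as plain data, which is handy for documenting or version-controlling the council setup. Below is a minimal Python sketch; the `CouncilAgent` structure and its field names are my own invention for illustration, not a Copilot Studio API:

```python
from dataclasses import dataclass, field

@dataclass
class CouncilAgent:
    """One specialist agent in the AI Council (illustrative model, not a Copilot Studio API)."""
    name: str
    description: str  # the description used when connecting the agent to the orchestrator
    knowledge_sources: list[str] = field(default_factory=list)

COUNCIL = [
    CouncilAgent(
        name="Responsible AI & Ethics Agent",
        description="Assesses ethical, societal, and Responsible AI risks related to AI use cases.",
        knowledge_sources=["Microsoft Responsible AI Standard", "EU AI Act summary"],
    ),
    CouncilAgent(
        name="Security & Privacy Agent",
        description="Evaluates security, privacy, and compliance risks for AI solutions.",
        knowledge_sources=["Microsoft Zero Trust guidance", "Microsoft Purview documentation"],
    ),
    CouncilAgent(
        name="Technical Architecture Agent",
        description="Reviews the technical feasibility and architecture of AI solutions.",
        knowledge_sources=["Azure OpenAI documentation", "Copilot Studio documentation"],
    ),
    CouncilAgent(
        name="Business Value & Use Case Agent",
        description="Assesses whether an AI use case has clear business value and ownership.",
        knowledge_sources=["AI use case canvas", "Benefit realization templates"],
    ),
]
```

Keeping the descriptions in one place like this also makes it easy to paste the exact same text into each agent's "connected agent" description field.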
Let’s create the agents
Then it’s time to test this in practice.
The image shows the goal of this solution: the AI Council agent becomes the main agent, orchestrating the four specialist agents.

I first create the orchestration agent; it should not answer questions itself, only compile and summarize the responses from the other specialized agents.
I then create the other four agents and give them instructions and knowledge sources based on the information I got from Microsoft 365 Copilot. I create them as full agents, not child agents; that way I keep full control, and different teams or developers can change an agent and publish a new version independently of the others.
All agents receive instructions and knowledge sources as described by Microsoft 365 Copilot earlier.

For the orchestration agent to be able to connect to these four agents, each of them must have that option enabled in Settings:

In the AI Council agent, our orchestration agent, I then connect the other four agents and give each the description provided by Microsoft 365 Copilot.
Here’s an example of such a description, for the Technical Architecture agent:
Reviews the technical feasibility and architecture of AI solutions. Assesses platform choice, scalability, operational complexity, integration options, and technical risks, with a focus on Microsoft technologies such as Copilot Studio and Azure OpenAI. Does not assess ethics, security policy, or business priority.
To connect the agents, go to the Agents tab and click the “Add an agent” button.
You will then find the four agents and can connect them one by one.

If you can’t find your agents, or they can’t be selected, you must publish them first before you can connect them to the orchestration agent.

Once we have added all four agents, it looks like the image above, and we can add instructions to the AI Council agent.
Here are the instructions I test with:
You are the AI Council Orchestrator.
Your role is to coordinate a virtual AI Council consisting of specialized expert agents. You do NOT provide expert opinions yourself.
Your responsibilities:
1. use Responsible AI and act according to their instructions
2. use Security and Privacy and act according to their instructions
3. use Technical Architecture and act according to their instructions
4. use Business Value and act according to their instructions
– Collect and consolidate their responses.
– Detect disagreements, risks, and missing information.
– Form a structured, neutral council recommendation.
You MUST NOT:
– Provide technical, legal, ethical, or security advice yourself.
– Make final or authoritative decisions.
Your final output MUST follow this structure:
AI Council Consolidated Recommendation:
– Case summary (1–2 sentences)
– Input sources consulted
– Key risks identified
– Areas of agreement
– Areas of disagreement or uncertainty
– Overall recommendation:
Go / Conditional Go / No-Go
– Required actions before proceeding
– Human owner / escalation point
If critical information is missing, state this explicitly.
These instructions tell the agent to ask all four other agents, compile the answers, and not answer anything by itself.
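Conceptually, the loop these instructions describe can be sketched in a few lines of Python. This is purely illustrative: the specialist agents are stubbed with canned answers, Copilot Studio handles the real routing, and the "most conservative verdict wins" consolidation rule is my own assumption, not something the platform enforces:

```python
# Illustrative sketch of the orchestrator's fan-out/consolidate flow.
# Agent "calls" are stand-ins for the connected Copilot Studio agents.

VERDICT_ORDER = ["Go", "Conditional Go", "No-Go"]  # least to most conservative

def consolidate(case: str, agents: dict) -> dict:
    """Ask every specialist agent, then form a neutral council recommendation."""
    assessments = {name: ask(case) for name, ask in agents.items()}
    verdicts = [a["verdict"] for a in assessments.values()]
    # Assumed rule: the council's verdict is the most conservative individual verdict.
    overall = max(verdicts, key=VERDICT_ORDER.index)
    return {
        "case_summary": case,
        "input_sources": sorted(assessments),
        "key_risks": [r for a in assessments.values() for r in a["risks"]],
        "overall_recommendation": overall,
    }

# Stubbed specialist agents returning canned assessments:
agents = {
    "Responsible AI": lambda case: {"verdict": "Conditional Go", "risks": ["human oversight needed"]},
    "Security & Privacy": lambda case: {"verdict": "Conditional Go", "risks": ["data classification unknown"]},
    "Technical Architecture": lambda case: {"verdict": "Go", "risks": []},
    "Business Value": lambda case: {"verdict": "Go", "risks": ["KPIs not defined"]},
}

result = consolidate("Copilot for M365 pilot for case managers", agents)
print(result["overall_recommendation"])  # Conditional Go
```

The important property, which the real instructions also enforce, is that `consolidate` never produces an opinion of its own; it only aggregates what the specialists return.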
So then we are ready for testing. First, I asked Microsoft 365 Copilot for a template for submitting an AI solution application to the AI Council. It went like this:
Instructions for the template:
Fill in all fields. Type Unknown where information is not available. Don’t delete headings or structure.
1️⃣ Basic Information
CaseID:
[e.g. AICXXX]
Initiator:
[Name/Entity/Customer]
Case Type:
[Internal Use | Customer Delivery | Pilot | Production | Other]
Short summary (max 3 sentences):
[What is being considered, and why now?]
2️⃣ Purpose and Goals
Primary purpose of the AI solution:
[Streamlining | Quality improvement | Decision support | Automation | Insights | Other]
What problem is solved without AI today?
[Describe current process]
What will be better with AI – concrete and measurable?
[Time, quality, risk, user experience, etc.]
3️⃣ Description of proposed solution
AI Type:
[Generative AI | Classic ML | Rules-based + AI | Unknown]
Suggested platform:
[Copilot for M365 | Copilot Studio | Azure OpenAI | To be determined]
Short solution outline:
[How is AI used in practice?]
Degree of Automation:
[Support Only | Partially automated | Fully automated | Not clarified]
4️⃣ Users and Influence
Primary user groups (roles):
[e.g., case managers, managers, customer service]
Number of users (estimate):
[<10 | 10-50 | 50-200 | 200+]
Impact on workday:
[Low | Medium | High]
5️⃣ Data and Information
Data Types Used:
[Internal Documents | Case documents | Personal data | Customer Data | Other]
Data classification:
[Open | Internal | Limited / personal data | Unknown]
Data sources:
[SharePoint, Dataverse, API, manual input, other]
6️⃣ Risk and maturity (own assessment)
Known Risks or Concerns:
[Technical, Legal, Ethical, Organizational]
Organizational AI maturity:
[Low | Medium | High]
Any skeptical stakeholders:
[Employee representatives, employees, management, customers, none]
7️⃣ Expected Value and Ownership
Expected benefit:
[Expected effect – assumed or measured]
Who owns the value? (role):
[e.g., department head, process owner]
How is success measured?
[KPI, Observation, Feedback, Not Defined]
8️⃣ Delimitations and prerequisites
What is explicitly not part of the solution?
[Important for Risk Assessment]
Time Frame:
[Experiment | Pilot | Production]
9️⃣ Questions for AI Council
What recommendation is wanted?
[Exact question the council will answer]
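The template's own instruction ("fill in all fields, type Unknown where information is not available") lends itself to a quick programmatic check before a case is sent to the council. Here is a small hedged sketch; the field list is a shortened, hypothetical selection from the template above, not a complete schema:

```python
# Minimal completeness check for an AI Council application.
# REQUIRED_FIELDS is an illustrative subset of the template sections above.

REQUIRED_FIELDS = [
    "CaseID", "Initiator", "Case Type", "Short summary",
    "Primary purpose", "Data classification", "Expected benefit",
    "Question for AI Council",
]

def missing_fields(application: dict) -> list[str]:
    """Return the template fields that are absent or left empty."""
    return [f for f in REQUIRED_FIELDS
            if not str(application.get(f, "")).strip()]

application = {
    "CaseID": "AIC001SENS",
    "Initiator": "Arild Aarnes Photography",
    "Case Type": "Customer Delivery | Pilot",
    "Short summary": "Copilot for M365 pilot for case managers.",
    "Primary purpose": "Streamlining",
    "Data classification": "Unknown",  # per the template: write Unknown, don't leave blank
    "Expected benefit": "10-20% time reduction",
    "Question for AI Council": "Should we proceed, and under what conditions?",
}
assert missing_fields(application) == []
```

Note that "Unknown" deliberately counts as filled in: the template wants gaps made explicit, not hidden, so the agents can flag them as risks.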

To continue, I asked Microsoft 365 Copilot for a prefilled template I could use to test the multi-agent system.
A completed example of an application/question to the AI Council can look like this:
1️⃣ Basic Information
CaseID: AIC001SENS
Initiator
Arild Aarnes Photography – Microsoft & Digital Workplace
Case Type: Customer Delivery – Pilot / Experiment
Short summary (max 3 sentences)
The customer wants to use Microsoft Copilot for Microsoft 365 for case managers and subject specialists to streamline document work and improve the basis for decision-making. A simple Copilot Studio agent is also being considered for support in case preparation based on internal guidelines.
2️⃣ Purpose and Goals
Primary purpose of the AI solution: ☑ Streamlining ☑ Quality improvement ☑ Decision support
What problem is solved without AI today?
Case officers spend considerable time reading, summarising and interpreting documents, drafting decisions and following up emails and meeting minutes manually. Quality and consistency vary between employees.
What will be better with AI – specifically?
Faster drafting of documents and summaries, better reuse of existing knowledge and more consistent assessments in case preparation.
3️⃣ Description of proposed solution
AI Type: ☑ Generative AI (text)
Suggested platform: ☑ Copilot for Microsoft 365 ☑ Copilot Studio
Short solution outline
Copilot for Microsoft 365 is used in Word, Outlook, and Teams for everyday writing and summarizing work. A Copilot Studio Agent is considered to answer questions about internal policies and support in case preparation using documents stored in SharePoint.
Degree of Automation: ☑ Support Only ☐ Partially Automated ☐ Fully Automated
4️⃣ Users and Influence
Primary user groups
- Case officers
- Specialists
- Managers with decision-making responsibility
Number of users (estimate)
☑ 50-200
Impact on workday: ☑ High
5️⃣ Data and Information
Data Types Used: ☑ Internal documents ☑ Case documents ☑ Personal data
Data classification: ☑ Unknown
(Data and level not yet mapped)
Data sources
- SharePoint Online
- Document libraries related to case areas
- Meeting documentation and minutes
6️⃣ Risk and maturity (own assessment)
Known risks or concerns
- Varying quality of existing documents
- Lack of overview of which documents contain personal data
- Concerns from union representatives related to AI use
Organizational AI maturity: ☑ Medium – some experience with digitalisation, limited practical use of AI
Any sceptical stakeholders: Union representatives and some professional communities
7️⃣ Expected Value and Ownership
Expected benefit: Estimated 10–20% reduction in time spent on documentation and reporting, as well as better quality and consistency in written deliveries.
Who owns the value?
Head of Department for Case Management
How do we know if this is successful?
Reduced time spent per case, user-perceived benefit and qualitative assessment from management. KPIs have not been definitively defined.
8️⃣ Delimitations and prerequisites
What is explicitly not part of the solution?
- No automated decisions
- No use of AI for legal final decision
- No fine-tuning of models
Time Frame: ☑ Pilot (2–6 months)
9️⃣ Questions for AI Council
What do you want a recommendation for?
Should Arild Aarnes Photography recommend moving forward with this Copilot-based solution now, and if so, under what conditions related to management, risk and implementation?

The AI Council agent sends the question to all four agents, consolidates the answers, and returns an answer like the following:
The AI Council’s consolidated recommendation:
Case summary
The proposal is for a pilot to use Microsoft Copilot for Microsoft 365 and a Copilot Studio agent to support case managers and subject matter specialists with document management, summarization, and decision support, with the goal of increased efficiency and quality.
Sources that have been reviewed
- Business value
- Responsible AI
- Security and privacy
- Technical architecture
Key risks identified
- Unclarified data classification and presence of personal data (GDPR risk)
- Potential resistance and trust challenges from unions and stakeholders
- Risk of over-reliance on AI results without adequate human control
- Dependency on the Microsoft ecosystem and integration preparations
Areas of agreement
- The use case is clear and corresponds to the goals of efficiency and quality improvement
- Expected value: 10-20% time savings and better consistency
- Ownership has been identified: Head of Department for Case Management
- Human control (human-in-the-loop) is necessary and possible
- Technical feasibility is high given the Microsoft platform
Areas of disagreement or uncertainty
- KPIs and success criteria are not fully defined
- Data governance and access controls must be clarified before start-up
- Organizational maturity is medium; Need for change management and training
Overall recommendation:
Conditional Go – Proceed with the pilot under clear conditions.
Required measures before start-up
- Complete data classification and verify GDPR compliance
- Define clear KPIs and success goals for the pilot
- Establish governance mechanisms for AI use, including human control and transparency
- Address stakeholder concerns through communication and training
- Secure access controls and identity governance for SharePoint and Copilot integration
Responsible owner / escalation point: Head of Department for Case Management (value owner), with escalation to Digital Workplace management.
Summary
This was just a small experiment to test multi-agent capabilities in Copilot Studio. Perhaps we shouldn’t replace an entire AI council with AI agents, but the council could use such agents as sparring partners when assessing cases.
As for the multi-agent test itself in Copilot Studio, it mostly worked according to plan. The biggest challenge was that I often got responses from one or more of the individual agents before the consolidated answer came from the orchestration agent. This makes for a confusing user experience and is hopefully something that will change.
A Copilot Studio multi-agent setup requires an orchestration agent, which means the individual agents do not know about each other: our “Responsible AI” agent cannot query the “Technical Architecture” agent directly; everything goes through the orchestration agent. This is fundamentally different from decentralized multi-agent systems, where agents know about each other and can communicate directly in a peer-to-peer network.
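The contrast between the two routing models can be sketched in a few lines of Python. This is purely illustrative message routing, not Copilot Studio code: in the hub-and-spoke model every exchange takes two hops via the orchestrator, while in a peer-to-peer model agents address each other directly in one hop.

```python
# Illustrative contrast between the two multi-agent routing models
# (not Copilot Studio code; messages are just logged as tuples).

def hub_and_spoke(sender: str, recipient: str, message: str, log: list) -> None:
    """Copilot Studio style: all traffic is relayed through the orchestrator."""
    log.append((sender, "Orchestrator", message))
    log.append(("Orchestrator", recipient, message))

def peer_to_peer(sender: str, recipient: str, message: str, log: list) -> None:
    """Decentralized style: agents talk directly to each other."""
    log.append((sender, recipient, message))

log = []
hub_and_spoke("Responsible AI", "Technical Architecture",
              "Is the data flow explainable?", log)
# Two hops: the orchestrator sits in the middle of every exchange.
assert len(log) == 2 and log[0][1] == "Orchestrator"

log.clear()
peer_to_peer("Responsible AI", "Technical Architecture",
             "Is the data flow explainable?", log)
# One hop: direct agent-to-agent communication.
assert len(log) == 1
```

The hub-and-spoke constraint is also what makes the council tractable: the orchestrator sees every exchange, so it can detect disagreement and gaps, at the cost of being a bottleneck and a single point of failure.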

References
AI Adoption – Cloud Adoption Framework | Microsoft Learn
Add other agents overview – Microsoft Copilot Studio | Microsoft Learn