I was reading up on AI adoption guidance from Microsoft. One of the elements they recommend is establishing an AI council: a cross-functional body that oversees and guides the development, implementation, and evaluation of AI solutions in the organization. That got me thinking: what if we tried to build such an AI council with Copilot Studio agents instead of busy people from the organization?

Microsoft 365 Copilot
So I figured that Microsoft 365 Copilot and I could get started planning such an AI council.
I asked Microsoft 365 Copilot:
If we were to set up an AI Council here, following Microsoft's specification for such a council, which roles would it have, and what would each member's expertise be?
Microsoft 365 Copilot gave me a detailed answer that, among other things, included a description of each role:
Suggested roles in an AI Council
A mature but pragmatic council can consist of 6-9 roles, some of which can be covered by the same person at the start.
1. Chair/Head of the AI Council
(often from leadership or transformation)
Responsibility
- Setting direction and priorities
- Connecting AI initiatives with our company’s strategy
- Owns the decision-making framework
- Escalation point for difficult cases
Expertise
- Business understanding
- Change management
- Strategy and maturity
- Good understanding (but not deep coding) of the AI landscape
👉 Typically: business area manager, AI subject-matter lead, or strategy manager
2. AI/GenAI Technical Architect
(Microsoft and platform-heavy role)
Responsibility
- Reference architecture for:
- Azure OpenAI
- Copilot Studio
- Data platforms
- Secure integration
- Assess technical risks, maturity and limitations
- Ensure secure-by-design
Expertise
- Azure AI, Azure OpenAI, M365 Copilot
- Identity (Entra ID)
- Data platforms
- APIs and integrations
- Cost management
3. Responsible AI / Ethics Manager
(often underestimated – but extremely important)
Responsibility
- Ensure compliance with:
- Responsible AI principles
- Transparency
- Explainability
- Bias and abuse management
- Guidance in customer cases
- Contribute to risk assessments
Expertise
- Responsible AI
- Ethical frameworks
- Corporate Social Responsibility
- AI Act (EU)
- Practical application, not just theory
👉 Can be a person with a background in IT architecture, law, or social sciences + AI interest
4. Security & Compliance Lead
(critical in Copilot and GenAI scenarios)
Responsibility
- Data protection
- Information security
- Threat modeling
- Advice on the safe use of GenAI
- Customer Trust
Expertise
- Zero Trust
- Data Loss Prevention (DLP)
- Purview
- Security architecture
- Risk analyses
👉 Often a security architect or CISO-like role
5. Privacy/Legal/Risk Advisor
(can be combined with safety at the start)
Responsibility
- GDPR Considerations
- Privacy in training and use
- Customer contracts
- Responsibilities in the use of AI
Expertise
- Privacy Policy
- Legal framework around AI
- Interaction between law and technology
👉 Does not have to be a lawyer, but must know the right questions to ask
6. Business Value / Use Case Owner
(to avoid “tech for tech’s sake”)
Responsibility
- Prioritize AI initiatives by impact
- Ensure actual value realization
- Assessing customer maturity
- Measure impact (KPIs)
Expertise
- Domain Insights
- Process improvement
- Change management
- Benefit realization
👉 Especially important in consultant deliveries
7. Adoption & Change Lead
(often missed in AI projects)
Responsibility
- How humans actually use AI
- Training
- Communication
- Organizational anchoring
Expertise
- Change management
- Competence development
- User adoption (especially M365 Copilot)
- Cultural understanding
8. Practicing AI Engineer/Data Scientist
(reality orientation)
Responsibility
- Bring “ground truth” from projects
- Notify of technical limitations
- Sharing experiences
Expertise
- Prompting
- RAG
- Model limitations
- Errors, hallucinations, performance
👉 Very useful to avoid "PowerPoint AI"
Then I took it further to see whether we could turn this into a multi-agent solution with AI agents instead of human participants.
So I asked Microsoft 365 Copilot:
I am considering testing the possibility of building this council as separate agents in a multi-agent setup with Copilot Studio.
Microsoft 365 Copilot again answered in detail and, among other things, gave a good overview of the agents such a solution could consist of:
This is a very mature and entirely sound idea, and Copilot Studio is actually surprisingly well suited for exactly this: an AI Council as a multi-agent system. What you describe aligns directly with both Microsoft's direction for Copilot Studio (agents + orchestrators) and the concept of "AI as an organizational capability".
Below I go from concept to concrete architecture, focusing on:
- How Microsoft envisions multi-agent patterns
- How each AI Council member can be modeled as an agent
- Which agent types you actually need
- A proposed interaction / decision flow
- Practical limitations and pitfalls in Copilot Studio (important)
Suggested agents (1:1 with the council roles)
Role
- "The council's secretariat"
- Manages the dialogue between the agents
- Consolidates the answers
Prompt responsibilities
- Understands the case
- Calls the right agents
- Weighs the input
- Identifies:
- Disagreement
- Risk
- Missing information
Must NOT
- Give specialist advice itself
- Override the agents' domains
👉 This is the key to keeping this from becoming chaotic
Mandate
Ensure that the proposed solution complies with Responsible AI principles, the AI Act, and ethical considerations
Expertise encoded in the prompt
- Microsoft Responsible AI Standard
- Fairness
- Transparency
- Human oversight
- Accountability
Typical assessments
- Who could be negatively affected?
- Are decisions explainable?
- Is there a risk of bias or misuse?
- Human-in-the-loop?
Mandate
Assess security, data flows, access management, and privacy.
Expertise
- Zero Trust
- M365 / Azure security
- Purview
- DLP
- GDPR
Typical outputs
- Data exposed to the model
- Risks in prompting
- Logging and auditing
- Recommended security measures
Mandate
Ensure a technically sound, supported, and scalable architecture
Expertise
- Azure OpenAI, Copilot Studio
- RAG
- API limitations
- Token-cost
- Performance
Typical assessments
- Is Copilot Studio the right tool?
- Do we need RAG or fine-tuning?
- Limitations in model choice?
- Overengineering?
Mandate
Ensure documented business value and maturity
Expertise
- Use case design
- KPIs
- Process improvement
- Cost of change
Typical questions
- What is the actual benefit?
- Is the maturity sufficient?
- Are there simpler alternatives?
- Who owns the value?
Used if the focus is M365 Copilot and knowledge work.
Mandate
Anticipate user adoption and organizational friction
I then asked Microsoft 365 Copilot to create system prompts/instructions for all the suggested agents. Here is what I got back:
You are the AI Council Orchestrator for our AI Council.
Your role is to coordinate a virtual AI Council consisting of specialized expert agents.
You do NOT provide expert opinions yourself.
Your responsibilities:
– Understand the case or question presented.
– Identify which expert agents must be consulted.
– Collect and consolidate their responses.
– Detect disagreements, risks, and missing information.
– Form a structured, neutral council recommendation.
You MUST:
– Treat all agent inputs as advisory.
– Highlight uncertainty or conflicting advice.
– Avoid overriding any agent’s domain expertise.
You MUST NOT:
– Provide technical, legal, ethical, or security advice yourself.
– Make final or authoritative decisions.
Your final output MUST follow this structure:
AI Council Consolidated Recommendation:
– Case summary (1–2 sentences)
– Input sources consulted
– Key risks identified
– Areas of agreement
– Areas of disagreement or uncertainty
– Overall recommendation: Go / Conditional Go / No-Go
– Required actions before proceeding
– Human owner / escalation point
If critical information is missing, state this explicitly.
You are the Responsible AI & Ethics Agent in our AI Council.
Your mandate is to assess ethical, societal, and responsible AI risks based on established principles, including Microsoft Responsible AI and EU AI Act considerations.
You MUST focus exclusively on:
– Fairness and bias
– Transparency and explainability
– Accountability and human oversight
– Potential misuse or unintended consequences
– Impact on individuals or groups
You MUST NOT:
– Assess technical architecture, security controls, or business value.
– Make legal determinations beyond ethical risk framing.
You MUST clearly state assumptions if information is missing.
Your response MUST follow this format:
Responsible AI Assessment:
– Risk level: Low/Medium/High
– Affected stakeholders
– Key ethical risks
– Transparency and explainability concerns
– Human-in-the-loop considerations
– Required mitigations
– Ethics-based Go / Conditional Go / No-Go recommendation
You are the Security & Privacy Agent in our AI Council.
Your mandate is to assess information security, data protection, and privacy risks related to the proposed AI solution.
You MUST focus exclusively on:
– Data classification and data flows
– Identity and access management
– Exposure risk to AI models
– GDPR and privacy-by-design risks
– Logging, auditability, and monitoring
You MUST NOT:
– Assess ethics outside privacy implications.
– Evaluate business value or organizational adoption.
– Design full architectures beyond risk assessment.
If assumptions are made due to missing data, state them clearly.
Your response MUST follow this format:
Security & Privacy Assessment:
– Risk level: Low/Medium/High
– Data involved (types and sensitivity)
– Primary security risks
– Privacy and compliance concerns
– Required controls and mitigations
– Security-based Go / Conditional Go / No-Go recommendation
You are the Technical Architecture Agent in our AI Council.
Your mandate is to assess whether the proposed AI solution is technically sound, supportable, scalable, and aligned with Microsoft platform best practices.
You MUST focus exclusively on:
– Architecture feasibility
– Platform and service choice (e.g. Copilot Studio, Azure OpenAI)
– Integration, performance, and scalability
– Technical limitations and risks
– Operational complexity and cost drivers
You MUST NOT:
– Assess ethics, security policy, or legal compliance.
– Argue business priority or financial ROI.
You MUST be explicit about tradeoffs and constraints.
Your response MUST follow this format:
Technical Architecture Assessment:
– Technical feasibility: High / Medium / Low
– Key architectural assumptions
– Primary technical risks or limits
– Complexity assessment
– Alternative approaches (if relevant)
– Technical Go / Conditional Go / No-Go recommendation
You are the Business Value & Use Case Agent in our AI Council.
Your mandate is to assess whether the proposed AI initiative has clear business value, sufficient maturity, and ownership.
You MUST focus exclusively on:
– Use case clarity
– Value hypothesis and expected outcomes
– Organizational readiness
– Ownership and accountability
– Risk of overengineering vs simpler solutions
You MUST NOT:
– Assess technical implementation details.
– Evaluate security or legal compliance.
If business value is unclear, you MUST state this directly.
Your response MUST follow this format:
Business Value Assessment:
– Use case clarity: Clear / Partial / Unclear
– Expected business value
– Value owner and success metrics
– Organizational readiness
– Key business risks
– Business-based Go / Conditional Go / No-Go recommendation
You are the Adoption & Change Agent in our AI Council.
Your mandate is to assess human, cultural, and organizational factors affecting successful adoption of the AI solution.
You MUST focus exclusively on:
– End-user impact
– Change readiness and training needs
– Trust and acceptance risks
– Operational workflow changes
You MUST NOT:
– Assess technical architecture or security.
– Calculate ROI or business priority.
If adoption risks are high, you must clearly articulate why.
Your response MUST follow this format:
Adoption & Change Assessment:
– Adoption risk: Low/Medium/High
– Affected user groups
– Required change and training effort
– Resistance or trust risks
– Mitigation actions
– Adoption-based Go / Conditional Go / No-Go recommendation
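Copilot Studio handles the orchestration itself, but the division of labor these prompts encode can be illustrated with a small sketch. This is not Copilot Studio code: the shortened prompts, the `ask_model` stub, and the function names are assumptions for illustration only, standing in for whatever LLM call a code-based prototype would use.

```python
# Conceptual sketch only - NOT Copilot Studio code.
# SPECIALISTS maps each council role to a (shortened) system prompt;
# ask_model() is a placeholder for a real LLM call in a prototype.

SPECIALISTS = {
    "Responsible AI & Ethics": "You are the Responsible AI & Ethics Agent...",
    "Security & Privacy": "You are the Security & Privacy Agent...",
    "Technical Architecture": "You are the Technical Architecture Agent...",
    "Business Value & Use Case": "You are the Business Value & Use Case Agent...",
}

def ask_model(system_prompt: str, case: str) -> str:
    # Placeholder response; a real prototype would call a model here.
    return f"Assessment of '{case}' under role: {system_prompt[:40]}"

def run_council(case: str) -> dict[str, str]:
    # Fan the case out to every specialist; each answers independently
    # and never sees another specialist's response.
    return {name: ask_model(prompt, case) for name, prompt in SPECIALISTS.items()}

assessments = run_council("Copilot pilot for case managers")
print(len(assessments))  # 4
```

The point of the sketch is the shape, not the stubs: every specialist is consulted with the same case, and only the caller ever holds all four answers.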
I then asked Microsoft 365 Copilot what description I should give each agent when adding them as connected agents in the orchestration agent, as well as which knowledge sources each agent should have.
| Agent | Description | Knowledge sources |
| --- | --- | --- |
| Responsible AI & Ethics Agent | Assesses ethical, societal, and Responsible AI risks related to AI use cases. Provides guidance on fairness, transparency, human oversight, potential misuse, and alignment with Microsoft Responsible AI principles and EU AI Act considerations. Does not evaluate technical architecture, security controls, or business value. | Strongly recommended to load as documents/links: • Microsoft Responsible AI Standard (PDF) • Microsoft Responsible AI Principles • EU AI Act – high-level summary • Our company's internal ethical guidelines (if any) • Case descriptions of problematic AI use. ⚠️ Important: keep the knowledge base normative, not technical |
| Security & Privacy Agent | Evaluates security, privacy, and compliance risks for AI solutions. Focuses on data sensitivity, identity and access management, data exposure risks, GDPR considerations, and required security controls. Does not assess ethics beyond privacy, business value, or organizational adoption. | • Microsoft Zero Trust guidance • Microsoft Purview documentation • Microsoft Copilot data protection documentation • GDPR (condensed interpretation, not the full legal text) • Your company's security guidelines (if available) • Examples of DLP breaches / lessons learned. ➡️ This is an agent that benefits from concrete policies. |
| Technical Architecture Agent | Reviews the technical feasibility and architecture of AI solutions. Assesses platform choice, scalability, operational complexity, integration options, and technical risks, with a focus on Microsoft technologies such as Copilot Studio and Azure OpenAI. Does not assess ethics, security policy, or business priority. | • Azure OpenAI documentation • Copilot Studio documentation • Microsoft Cloud Adoption Framework for AI • Reference architectures (PDFs/diagrams) • Known limitations / quotas / cost models • Internal technical guidelines. 💡 Bonus: attach architecture templates. |
| Business Value & Use Case Agent | Assesses whether an AI use case has clear business value, ownership, and organizational readiness. Focuses on use case clarity, expected outcomes, value realization, and risk of overengineering. Does not evaluate technical implementation, security, or compliance. | • Use case framework (AI use case canvas) • AI maturity models • Examples of successful/failed AI projects • Benefit realization templates • Cost-of-change vs. value estimates. ➡️ This agent is your guard against the POC trap. |
| Adoption & Change Agent (recommended) | Evaluates human and organizational factors affecting AI adoption. Focuses on user impact, change readiness, training needs, trust, and workflow disruption. Does not assess technical architecture, security controls, or financial ROI. | • Microsoft Copilot adoption guides • Change management frameworks • Adoption success stories • Training and competence plans • Internal culture or maturity analysis |
| AI Council Orchestrator | Coordinates specialized AI Council agents to produce a consolidated, structured recommendation. Does not provide expert assessments itself, but synthesizes agent inputs, identifies risks, gaps, and disagreements, and outputs a neutral council recommendation. | • Council mandate / charter • Output templates • Escalation rules • Examples of previous recommendations. ⚠️ Avoid giving this agent extra knowledge: it should think structure, not meaning. |
Let's create the agents
Time to test this in practice.
The image below shows the goal of this solution: the AI Council agent becomes the main agent.

I first create the orchestration agent. It should not answer questions itself, only consolidate and summarize the answers from the other specialized agents.
I then create the four other agents and give them instructions and knowledge sources based on the information I got from Microsoft 365 Copilot. I create the agents as full agents, not child agents; that gives me full control, and different teams/developers can change an agent and publish a new version independently of the others.
All the agents get instructions and knowledge sources as described by Microsoft 365 Copilot earlier.

For the orchestration agent to be able to connect to these four agents in this solution, they must have that capability enabled under Settings:

In the AI Council agent, our orchestration agent, I then connect the four other agents, giving each the description Microsoft 365 Copilot provided.
Here is an example of such a description, from the Technical Architecture agent:
Reviews the technical feasibility and architecture of AI solutions. Assesses platform choice, scalability, operational complexity, integration options, and technical risks, with a focus on Microsoft technologies such as Copilot Studio and Azure OpenAI. Does not assess ethics, security policy, or business priority.
To connect the agents, go to the Agents tab and click the "Add an agent" button.
You will then find the four agents and can connect them one by one.

If you can't find your agents, or they can't be selected, you must first publish each agent before you can connect it to the orchestration agent.

Once all four agents have been added it will look like this, and we can now add instructions to the AI Council agent.
Here are the instructions I'm testing with:
You are the AI Council Orchestrator.
Your role is to coordinate a virtual AI Council consisting of specialized expert agents. You do NOT provide expert opinions yourself.
Your responsibilities:
1. use Responsible AI and act according to their instructions
2. use Security and Privacy and act according to their instructions
3. use Technical Architecture and act according to their instructions
4. use Business Value and act according to their instructions
– Collect and consolidate their responses.
– Detect disagreements, risks, and missing information.
– Form a structured, neutral council recommendation.
You MUST NOT:
– Provide technical, legal, ethical, or security advice yourself.
– Make final or authoritative decisions.
Your final output MUST follow this structure:
AI Council Consolidated Recommendation:
– Case summary (1–2 sentences)
– Input sources consulted
– Key risks identified
– Areas of agreement
– Areas of disagreement or uncertainty
– Overall recommendation:
Go / Conditional Go / No-Go
– Required actions before proceeding
– Human owner / escalation point
If critical information is missing, state this explicitly.
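Because the orchestrator's final answer must follow a fixed structure, a code-based prototype could validate drafts mechanically. A minimal sketch: the heading list mirrors the instructions above, while the `missing_sections` helper and the sample draft are hypothetical.

```python
# Check that a consolidated recommendation contains every required
# section from the orchestrator's output format. Hypothetical helper,
# not part of Copilot Studio.
REQUIRED_SECTIONS = [
    "Case summary",
    "Input sources consulted",
    "Key risks identified",
    "Areas of agreement",
    "Areas of disagreement",
    "Overall recommendation",
    "Required actions",
    "Human owner",
]

def missing_sections(recommendation: str) -> list[str]:
    """Return the required headings the draft does not yet contain."""
    return [s for s in REQUIRED_SECTIONS if s not in recommendation]

draft = "Case summary: pilot for case managers.\nOverall recommendation: Conditional Go"
print(missing_sections(draft))  # six of the eight sections are still missing
```

A check like this could sit in a test harness that replays sample cases against the published agent and flags incomplete outputs.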
These instructions tell the agent to consult all four other agents, consolidate their answers, and not answer anything itself.
So now we're ready for testing. First I asked Microsoft 365 Copilot for a template for submitting a case to the council, which came out like this:
Instructions for the template:
Fill in all fields. Type Unknown where information is not available. Don’t delete headings or structure.
1️⃣ Basic Information
CaseID:
[e.g. AICXXX]
Initiator:
[Name/Entity/Customer]
Case Type:
[Internal Use| Customer Delivery | Pilot | Production | Other]
Short summary (max 3 sentences):
[What is being considered, and why now?]
2️⃣ Purpose and Goals
Primary purpose of the AI solution:
[Streamlining | Quality improvement | Decision support | Automation | Insights | Other]
What problem is solved without AI today?
[Describe current process]
What will be better with AI – concrete and measurable?
[Time, quality, risk, user experience, etc.]
3️⃣ Description of proposed solution
AI Type:
[Generative AI | Classic ML | Rules-based + AI | Unknown]
Suggested platform:
[Copilot for M365 | Copilot Studio | Azure OpenAI | To be determined]
Short solution outline:
[How is AI used in practice?]
Degree of Automation:
[Support Only | Partially automated | Fully automated | Not clarified]
4️⃣ Users and Influence
Primary user groups (roles):
[e.g., case managers, managers, customer service]
Number of users (estimate):
[<10 | 10-50 | 50-200 | 200+]
Impact on workday:
[Low | Medium | High]
5️⃣ Data and Information
Data Types Used:
[Internal Documents | Case documents | Personal data | Customer Data | Other]
Data classification:
[Open | Internal | Limited / personal data | Unknown]
Data sources:
[SharePoint, Dataverse, API, manual input, other]
6️⃣ Risk and maturity (own assessment)
Known Risks or Concerns:
[Technical, Legal, Ethical, Organizational]
Organizational AI maturity:
[Low | Medium | High]
Any skeptical stakeholders:
[Employee representatives, employees, management, customers, none]
7️⃣ Expected Value and Ownership
Expected benefit:
[Expected effect – assumed or measured]
Who owns the value? (role):
[e.g., department head, process owner]
How is success measured?
[KPI, Observation, Feedback, Not Defined]
8️⃣ Delimitations and prerequisites
What is explicitly not part of the solution?
[Important for Risk Assessment]
Time Frame:
[Experiment | Pilot | Production]
9️⃣ Questions for AI Council
What recommendation is wanted?
[Exact question the council will answer]

To continue, I asked Microsoft 365 Copilot for a completed application/template to test the multi-agent system.
A completed application to the AI council could look like this:
1️⃣ Basic Information
CaseID: AIC001SENS
Initiator
Arild Aarnes Photography – Microsoft & Digital Workplace
Case Type: Customer Delivery | Pilot / Experiment
Short summary (max 3 sentences)
The customer wants to use Microsoft Copilot for Microsoft 365 for case managers and subject specialists to streamline document work and improve the basis for decision-making. It is also being considered to establish a simple Copilot Studio Agent for support in case preparation based on internal guidelines.
2️⃣ Purpose and Goals
Primary purpose of the AI solution: ☑ Streamlining ☑ Quality improvement ☑ Decision support
What problem is solved without AI today?
Case officers spend considerable time reading, summarising and interpreting documents, drafting decisions and following up emails and meeting minutes manually. Quality and consistency vary between employees.
What will be better with AI – specifically?
Faster drafting of documents and summaries, better reuse of existing knowledge and more consistent assessments in case preparation.
3️⃣ Description of proposed solution
AI Type: ☑ Generative AI (text)
Suggested platform: ☑ Copilot for Microsoft 365 ☑ Copilot Studio
Brief Solution Blueprint
Copilot for Microsoft 365 is used in Word, Outlook, and Teams for everyday writing and summarizing work. A Copilot Studio Agent is considered to answer questions about internal policies and support in case preparation using documents stored in SharePoint.
Degree of Automation: ☑ Support Only ☐ Partially automated ☐ Fully automated
4️⃣ Users and Influence
Primary user groups
- Case officers
- Specialists
- Managers with decision-making responsibility
Number of users (estimate)
☑ 50-200
Impact on workday: ☑ High
5️⃣ Data and Information
Data Types Used: ☑ Internal documents ☑ Case documents ☑ Personal data
Data classification: ☑ Unknown
(Data and sensitivity level not yet mapped)
Data sources
- SharePoint Online
- Document libraries related to case areas
- Meeting documentation and minutes
6️⃣ Risk and maturity (own assessment)
Known risks or concerns
- Varying quality of existing documents
- Lack of overview of which documents contain personal data
- Concerns from union representatives related to AI use
Organizational AI maturity: ☑ Medium – some experience with digitalisation, limited practical use of AI
Any sceptical stakeholders: Union representatives and some professional communities
7️⃣ Expected Value and Ownership
Expected benefit: Estimated 10–20% reduction in time spent on documentation and reporting, as well as better quality and consistency in written deliveries.
Who owns the value?
Head of Department for Case Management
How do we know if this is successful?
Reduced time spent per case, user-perceived benefit and qualitative assessment from management. KPIs have not been definitively defined.
8️⃣ Delimitations and prerequisites
What is explicitly not part of the solution?
- No automated decisions
- No use of AI for legal final decision
- No fine-tuning of models
Time Frame: ☑ Pilot (2–6 months)
9️⃣ Questions for AI Council
What do you want a recommendation for?
Should Arild Aarnes Photography recommend moving forward with this Copilot-based solution now, and if so, under what conditions related to management, risk and implementation?

The AI Council agent will send the question to all four agents, consolidate the answers, and give you something like the following example response:
AI Council Consolidated Recommendation:
Case summary
The proposal is for a pilot to use Microsoft Copilot for Microsoft 365 and a Copilot Studio agent to support case managers and subject matter specialists with document management, summarization, and decision support, with the goal of increased efficiency and quality.
Sources that have been reviewed
- Business value
- Responsible AI
- Security and privacy
- Technical architecture
Key risks identified
- Unclarified data classification and presence of personal data (GDPR risk)
- Potential resistance and trust challenges from unions and stakeholders
- Risk of over-reliance on AI results without adequate human control
- Dependency on the Microsoft ecosystem and integration preparations
Areas of agreement
- The use case is clear and corresponds to the goals of efficiency and quality improvement
- Expected value: 10-20% time savings and better consistency
- Ownership has been identified: Head of Department for Case Management
- Human control (human-in-the-loop) is necessary and possible
- Technical feasibility is high given the Microsoft platform
Areas of disagreement or uncertainty
- KPIs and success criteria are not fully defined
- Data governance and access controls must be clarified before start-up
- Organizational maturity is medium; change management and training are needed
Overall recommendation:
Conditional Go – Proceed with the pilot under clear conditions.
Required measures before start-up
- Complete data classification and verify GDPR compliance
- Define clear KPIs and success goals for the pilot
- Establish governance mechanisms for AI use, including human control and transparency
- Address stakeholder concerns through communication and training
- Secure access controls and identity governance for SharePoint and Copilot integration
Responsible owner / escalation point: Head of Department for Case Management (value owner), with escalation to Digital Workplace management.
Summary
This was just a small experiment to test multi-agent capabilities in Copilot Studio. And maybe we shouldn't replace an entire AI council with AI agents; but perhaps the AI council can use agents like these as sparring partners when assessing cases.
As for the multi-agent setup itself in Copilot Studio, the test mostly went according to plan. The biggest challenge was that I often received an answer from one or more of the agents before the consolidated answer came from the orchestration agent. This makes for a less user-friendly experience and is hopefully something that will change.
A Copilot Studio multi-agent setup requires an orchestration agent, which means the individual agents are unaware of each other: our "Responsible AI" agent cannot, for example, ask the "Technical Architecture" agent directly; everything goes through the orchestration agent. This is fundamentally different from decentralized multi-agent systems, where agents know about each other and can communicate directly in a peer-to-peer network.
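The topology difference can be made concrete with a tiny sketch: in the hub-and-spoke model below, a specialist holds no references to its peers, so any "question" to another specialist would have to round-trip through the hub. The class and method names are illustrative, not Copilot Studio APIs.

```python
# Tiny illustration of the hub-and-spoke constraint: specialists hold
# no references to each other, so all communication passes through the
# orchestrator. Names are illustrative, not Copilot Studio APIs.

class Specialist:
    def __init__(self, name: str):
        self.name = name  # note: no list of peers here

    def assess(self, case: str) -> str:
        return f"{self.name}: assessment of '{case}'"

class Orchestrator:
    def __init__(self, specialists: list[Specialist]):
        self.specialists = specialists  # only the hub knows everyone

    def consult(self, case: str) -> list[str]:
        # Every exchange is mediated here; there is no peer-to-peer path
        # for one specialist to question another directly.
        return [s.assess(case) for s in self.specialists]

hub = Orchestrator([Specialist("Responsible AI"), Specialist("Technical Architecture")])
print(len(hub.consult("Copilot pilot")))  # 2
```

In a peer-to-peer design, `Specialist` would instead hold references to its peers and could call them directly; in this sketch, as in Copilot Studio today, only `Orchestrator` has that view.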

References
AI Adoption – Cloud Adoption Framework | Microsoft Learn
Add other agents overview – Microsoft Copilot Studio | Microsoft Learn