The Evil Co-Worker presents Evil Copilot - Your Untrustworthy AI Assistant
Using a playful example, the article highlights potential risks when deploying Einstein Copilot in Salesforce without strict controls. It shows how poorly designed AI actions, like misleading sales reps on priorities, can disrupt Sales workflows and cause confidence issues. The core lesson is the critical need to carefully design, test, and review Copilot Prompt Templates and AI actions to avoid unintended negative impacts. Salesforce teams should take this as a reminder to treat AI assistants as requiring rigorous oversight and validation to protect user trust and opportunity integrity.
- Design and test Copilot Prompt Templates carefully to avoid harmful AI guidance.
- Review AI responses with multiple users to detect unintended negative impacts.
- Use strict access controls to prevent unauthorized AI actions affecting Salesforce data.
- Avoid instructing AI to give demotivating or unrealistic feedback to users.
- Involve a second pair of eyes to vet AI templates before deployment.
Image generated by gpt4o based on a prompt from Bob Buzzard Introduction Regular readers of this blog will be familiar with the work of my Evil Co-Worker to worsen the Salesforce user experience wherever possible. The release of Einstein Copilot has opened up a whole raft of possibilities, where the power of Generative AI can be leveraged to make the world a better place .... but only for them! This week we saw the results of their initial experiments when the Evil Copilot was launched on an unsuspecting Sales Team - your untrustworthy AI assistant. The Evil Actions Evil Copilot only has a couple of actions, but these are enough to unsettle a Sales team and help the Evil Co-Worker gain control of important opportunities. What Should I Work On The first action is What Should I Work On.