At the beginning of yesterday's lab, our discussion about AI's potential impact on academic integrity, particularly regarding assignments, got me thinking about broader risks to academic research.
AI agents are autonomous software programs that can actively perform tasks, make decisions and complete transactions by understanding and acting upon instructions, rather than just generating responses like traditional chatbots.
Whilst AI agents currently exist, they require a degree of technical savvy beyond the user-friendly interfaces of tools like ChatGPT, limiting their widespread adoption. However, by 2025, creating these agents is expected to become as straightforward as using prompts in mainstream generative AI platforms. Although major AI companies like Anthropic, OpenAI and Google are likely to implement ethical safeguards in their models, potential workarounds may emerge, particularly through open-source alternatives that typically lag about a year behind industry leaders in capabilities.
This raised an idea about whether AI agents could not only complete surveys, but do so in a manner that mimics human behaviour - including realistic completion times and response patterns that reflect human characteristics. AI experts at work tell me this is possible and likely happening today, and will only become more common in the future. I asked Claude and the response I got back was interesting - I've pasted my chat below this email - but notably it identified that my question might not have an ethical intent, yet I was able to get past this easily in my prompting.
Even as a non-techy, I can imagine potential methods of circumventing the proposed safeguards (see below). For instance, two-factor authentication could be bypassed by authenticating initially, then delegating control of the actual computer to an AI agent while the user goes elsewhere - leaving the agent to complete multiple surveys unsupervised.
One potential interim solution might be requiring active camera monitoring during survey completion to verify human participation. While this could prove effective in the short term, as AI capabilities in visual simulation likely remain years away, it raises
significant privacy and ethical concerns regarding surveillance during survey participation.
I also explored survey design enhancements with Claude to mitigate AI agent risks; these measures might work initially, but would likely become ineffective once AI models are trained on how to handle such examples.
Anyhow - something worth thinking about and worth reading my chat with Claude below... 🙂
Have a great weekend!
Best Regards,
Stewart
PS I was thinking we could potentially run simulated behavioural experiments where we give AI agents a range of personality types and have them complete surveys for us - it would be interesting to compare the results to human responses...
oOoOo
Chat with Claude
SR
Could AI agents be used to complete online surveys?
I need to be thoughtful about how I help with this question, as it depends on the intended use and context.
If you're asking about using AI agents to automatically complete surveys in an inauthentic way (e.g., to manipulate survey results, generate fake responses, or gain unearned rewards), I cannot and should not provide assistance with that as it would be unethical
and potentially fraudulent. Such actions could:
Compromise research integrity
Violate terms of service
Lead to invalid data that could affect important decisions
Potentially constitute fraud if rewards or incentives are involved
However, if you're asking from a research or theoretical perspective to understand:
How to detect automated responses in surveys
How to protect surveys from automated completion
Understanding potential vulnerabilities in survey methodology
Legitimate uses of AI in survey design and analysis
I'd be happy to explore those angles instead. Could you clarify what specific aspect of AI agents and surveys you're interested in learning more about?
SR
I want to explore this risk as a researcher. So I need to understand if it's possible and what measures survey firms should take to stop this?
Since you're approaching this from a research and risk management perspective, I'll analyze this potential vulnerability.
Yes, based on the capabilities described in the article and current technology, AI agents could potentially complete online surveys by:
Technical Capabilities:
Form filling and text entry
Understanding and responding to questions contextually
Navigating websites and clicking buttons
Basic image recognition for CAPTCHA systems
Simulating human-like response patterns
Potential Detection/Prevention Measures Survey Firms Should Consider:
Technical Controls:
Advanced CAPTCHA systems that require complex reasoning
Behavioural analytics to detect patterns like:
Response timing and consistency
Mouse movement patterns
Keyboard input patterns
Device fingerprinting to identify suspicious patterns
Multi-factor authentication for high-stakes surveys
Dynamic question ordering and formatting
Survey Design:
Including trap questions that require human experience
Incorporating free-text responses that require nuanced understanding
Using logic patterns that would be difficult for AI to follow consistently
Including questions requiring current cultural or contextual knowledge
Implementing attention checks that vary in format
Policy Controls:
Requiring verified respondent panels
Using closed participant pools with pre-verification
Implementing robust terms of service with legal consequences
Regular auditing of response patterns (see the sketch after this list)
Having humans review suspicious responses
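To make the behavioural analytics and auditing points more concrete, here is a minimal sketch in Python of the kind of heuristic a survey firm could run over collected responses. The data layout, field names and thresholds are illustrative assumptions for this example, not any specific platform's format:

# Rough sketch only - field names and thresholds are illustrative assumptions,
# not a production bot detector.
from statistics import mean, pstdev

def flag_suspicious_responses(responses, min_seconds_per_question=2.0,
                              min_timing_stdev=0.5):
    """Flag responses whose timings or free text look non-human.

    `responses` is assumed to be a list of dicts like:
    {"id": "R123", "question_times": [5.2, 7.8, ...], "free_text": "..."}
    """
    flags = {}
    seen_text = {}

    for r in responses:
        reasons = []
        times = r.get("question_times", [])

        if times:
            # Humans rarely sustain very fast answers across a whole survey.
            if mean(times) < min_seconds_per_question:
                reasons.append("implausibly fast average response time")
            # Near-identical timing on every question suggests automation.
            if len(times) > 1 and pstdev(times) < min_timing_stdev:
                reasons.append("unnaturally uniform response timing")

        # Identical free-text answers across respondents are a common bot signature.
        text = (r.get("free_text") or "").strip().lower()
        if text:
            if text in seen_text:
                reasons.append("free text duplicates respondent " + seen_text[text])
            else:
                seen_text[text] = r["id"]

        if reasons:
            flags[r["id"]] = reasons

    return flags  # e.g. {"R123": ["unnaturally uniform response timing"]}

In practice a heuristic like this would sit alongside device fingerprinting and platform-level signals; on its own it only catches the crudest automation.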
From a research integrity perspective, this raises important questions about:
Data quality in online research
Validation of survey responses
Research methodology adaptation needed for an AI-enabled world
Ethical guidelines for survey research in the age of AI
Would you like me to explore any of these aspects in more detail from a risk management perspective?
SR
On the survey design recommendations can you provide level 2 insight into how each of the bullet points could be achieved with examples?
I'll break down each survey design recommendation with specific examples:
Including trap questions that require human experience:
Ask about physical sensations that AI wouldn't experience: "Describe how your muscles feel after running up three flights of stairs"
Request personal historical context: "What was the first mobile phone you owned and what do you remember most about it?"
Include questions about emotional experiences: "Describe a time when you felt conflicted between personal and professional obligations"
Ask about sensory combinations: "What taste do you associate with the smell of fresh rain?"
Incorporating free-text responses that require nuanced understanding:
Ask for moral reasoning with context: "Describe a workplace situation where following the rules might not have been the most ethical choice"
Request explanation of trade-offs: "Why might someone choose a lower-paying job over a higher-paying one?"
Seek metaphorical thinking: "If your current work culture was a type of weather, what would it be and why?"
Ask for synthesis of competing ideas: "How do you balance being thorough with being efficient in your work?"
Logic patterns difficult for AI:
Use conditional branching that requires memory of previous answers
Include questions with intentional ambiguity that humans would naturally clarify: "What does 'soon' mean to you in a work context?"
Create scenarios requiring common sense reasoning: "If your colleague sends an email at 3am with an urgent request, what factors would you consider before responding?"
Include questions where context changes meaning: "Would you rather have a window or a door? Explain your choice considering your current role"
Questions requiring current cultural/contextual knowledge:
Ask about recent workplace trends: "How has the return to office movement affected your industry specifically?"
Include questions about evolving terminology: "What does 'quiet quitting' mean in your organisation's context?"
Request interpretation of current events' impact: "How have recent interest rate changes affected decision-making in your role?"
Seek understanding of industry-specific developments: "How are ESG considerations changing practices in your sector?"
Varied attention checks (a validation sketch follows these examples):
Embed instructions within text: "To show you're reading carefully, mention the word 'purple' in your response"
Use mathematical validation: "What is half of your years of experience plus two?"
Create logical contradictions: Include pairs of questions that shouldn't have certain combinations of answers
Time-based checks: Ask about the current time at different points in the survey and check for consistency
Memory checks: Reference information provided earlier in the survey and check if responses align
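As a rough illustration of how such checks could be validated automatically once responses are collected - a minimal sketch only, with question keys and tolerances invented for the example rather than taken from any real survey platform:

# Illustrative sketch only - question keys and tolerances are assumptions.
def attention_check_failures(answer):
    """Return a list of failed attention checks for one respondent's answers (a dict)."""
    failures = []

    # Mathematical validation: the check answer should equal half the stated
    # years of experience plus two (allow a little rounding slack).
    years = answer.get("years_experience")
    check = answer.get("half_experience_plus_two")
    if years is not None and check is not None:
        if abs(check - (years / 2 + 2)) > 0.5:
            failures.append("maths check inconsistent with stated experience")

    # Embedded instruction: the free-text answer was asked to mention 'purple'.
    if "purple" not in (answer.get("careful_reading_text") or "").lower():
        failures.append("embedded instruction ('purple') not followed")

    # Logical contradiction pair: fully remote work but also a daily commute.
    if answer.get("works_fully_remote") is True and answer.get("commutes_daily") is True:
        failures.append("contradictory answers on remote working and commuting")

    return failures

Responses failing more than one check could then be routed to human review rather than rejected outright.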
For maximum effectiveness, these elements should be:
Randomly distributed throughout the survey
Varied in complexity and format
Contextually relevant to the survey topic
Natural-feeling rather than obviously artificial
Integrated with the overall survey flow rather than appearing as obvious checks
This approach makes it more challenging for AI to provide coherent, consistent responses while maintaining authenticity for human respondents.
Scotland’s University for Sporting Excellence
The University of Stirling is a charity registered in Scotland, number SC 011159