OpenAI Is Asking Contractors to Upload Work From Past Jobs to Evaluate the Performance of AI Agents
Summary
OpenAI has reportedly asked third-party contractors to upload real work they produced at current or previous jobs so the company can use those files to evaluate how well AI agents perform office tasks. The request, documented in materials obtained by WIRED and coordinated through the training-data firm Handshake AI, asks for concrete deliverables (Word documents, PDFs, spreadsheets, images, code repositories) rather than summaries, and instructs contractors to remove personally identifiable information and proprietary data before uploading.
The exercise is part of OpenAI’s broader effort to establish human baselines across occupations and compare them with AI performance as it pushes toward more capable agents. The company says examples can also be fabricated to mimic realistic on-the-job responses, but emphasises that shared files should reflect actual, long-form work people have done.
Legal and privacy concerns are central: lawyers warn that even scrubbed documents can expose trade secrets or breach nondisclosure agreements, and some potential data sources reportedly declined to participate because they doubted effective anonymisation. The story also highlights the expanding market for skilled contractors who produce higher-quality training data for AI labs.
Key Points
- OpenAI asked contractors to upload real deliverables (actual files, not summaries) from past or current jobs to evaluate AI agents.
- The company frames the work as building human baselines to compare AI performance across real-world tasks as part of its evaluation programme.
- Contractors are told to remove personal information and proprietary or material nonpublic information; OpenAI referenced tools and guidance for scrubbing files.
- Legal experts warn of trade-secret misappropriation and potential NDA breaches if confidential material is inadvertently shared.
- The request illustrates a trend: AI firms hiring skilled contractors and paying more for high-quality, real-world training and evaluation data.
- Some prospective data sources refused to participate, doubting that personal and confidential information could be reliably removed.
Context and Relevance
This piece sits at the intersection of AI development, labour and data governance. As models aim to automate complex office work, labs are increasingly sourcing realistic, high-value examples to measure progress. That raises thorny questions about consent, contractual obligations, trade secrets and the limits of anonymisation — issues that matter to employees, employers and regulators, especially under frameworks such as GDPR.
Why should I read this?
Short version: if you work with documents, NDAs or any kind of proprietary data, this affects you. It shows how AI firms are trying to teach agents real job tasks — and how messy that can get when actual workplace files and legal rules collide. Worth five minutes to know the risks.
Source
Source: https://www.wired.com/story/openai-contractor-upload-real-work-documents-ai-agents/
