An insider’s guide to decoding Big Tech’s language and challenging the assumptions and values baked in:
accountability (n) – The act of holding someone else responsible for the consequences when your AI system fails.
accuracy (n) – Technical correctness. The most important measure of success in evaluating an AI model’s performance. See validation.
adversary (n) – A lone engineer capable of disrupting your powerful revenue-generating AI system. See robustness, security.
alignment (n) – The challenge of designing AI systems that do what we tell them to and value what we value. Purposely abstract. Avoid using real examples of harmful unintended consequences. See safety.
artificial general intelligence (ph) – A hypothetical AI god that’s probably far off in the future but also maybe imminent. Can be really good or really bad, whichever is more rhetorically useful. Obviously you’re building the good one. Which is expensive. Therefore, you need more money. See long-term risks.
audit (n) – A review that you pay someone else to do of your company or AI system so that you appear more transparent without needing to change anything. See impact assessment.
augment (v) – To increase the productivity of white-collar workers. Side effect: automating away blue-collar jobs. Sad but inevitable.
beneficial (adj) – A blanket descriptor for what you are trying to build. Conveniently ill-defined. See value.
by design (ph) – As in “fairness by design” or “accountability by design.” A phrase to signal that you are thinking hard about important things from the beginning.
compliance (n) – The act of following the law. Anything that isn’t illegal goes.
data labelers (ph) – The people who allegedly exist behind Amazon’s Mechanical Turk interface to do data cleaning work for cheap. Unsure who they are. Never met them.
democratize (v) – To scale a technology at all costs. A justification for concentrating resources. See scale.
diversity, equity, and inclusion (ph) – The act of hiring engineers and researchers from marginalized groups so you can parade them around to the public. If they challenge the status quo, fire them.
efficiency (n) – The use of less data, memory, staff, or energy to build an AI system.
ethics board (ph) – A group of advisors without real power, convened to create the appearance that your company is actively listening. Examples: Google’s AI ethics board (canceled), Facebook’s Oversight Board (still standing).
ethics principles (ph) – A set of truisms used to signal your good intentions. Keep it high-level. The vaguer the language, the better. See responsible AI.
explainable (adj) – For describing an AI system that you, the developer, and the user can understand. Much harder to achieve for the people it’s used on. Probably not worth the effort. See interpretable.
fairness (n) – A complicated notion of impartiality used to describe unbiased algorithms. Can be defined in dozens of ways based on your preference.
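The joke has a technical core: “fairness” really can be formalized in dozens of mutually incompatible ways. A minimal sketch comparing two common criteria, demographic parity and equal true-positive rates, on toy data invented purely for illustration:

```python
# Toy predictions for two groups: lists of (predicted, actual) label pairs.
# All numbers are invented for illustration.
group_a = [(1, 1), (1, 0), (0, 1), (1, 1)]
group_b = [(0, 1), (1, 1), (0, 0), (0, 1)]

def positive_rate(preds):
    # Demographic parity compares the share of positive predictions per group.
    return sum(p for p, _ in preds) / len(preds)

def true_positive_rate(preds):
    # Equality of opportunity compares accuracy among actual positives only.
    positives = [(p, a) for p, a in preds if a == 1]
    return sum(p for p, _ in positives) / len(positives)

dp_gap = abs(positive_rate(group_a) - positive_rate(group_b))
tpr_gap = abs(true_positive_rate(group_a) - true_positive_rate(group_b))
# The two criteria measure different gaps on the same classifier,
# so a model can satisfy one definition of "fair" while failing another.
```

The two gap values generally differ, which is the point: pick the definition your model already satisfies.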
for good (ph) – As in “AI for good” or “data for good.” An initiative completely tangential to your core business that helps you generate good publicity.
foresight (n) – The ability to peer into the future. Basically impossible; thus, a perfectly reasonable explanation for why you can’t rid your AI system of unintended consequences.
framework (n) – A set of guidelines for making decisions. A good way to appear thoughtful and measured while delaying actual decision-making.
generalizable (adj) – The sign of a good AI model. One that continues to work under changing conditions. See real world.
governance (n) – Bureaucracy.
human-centered design (ph) – A process that involves using “personas” to imagine what an average user might want from your AI system. May involve soliciting feedback from actual users. Only if there’s time. See stakeholders.
human in the loop (ph) – Any person that is part of an AI system. Responsibilities range from faking the system’s capabilities to warding off accusations of automation.
impact assessment (ph) – A review that you do yourself of your company or AI system to show your willingness to consider its downsides without changing anything. See audit.
integrity (n) – Issues that undermine the technical performance of your model or your company’s ability to scale. Not to be confused with issues that are bad for society. Not to be confused with honesty.
interdisciplinary (adj) – Term used for any team or project involving people who do not code: user researchers, product managers, moral philosophers. Especially moral philosophers.
interpretable (adj) – Description of an AI system whose computation you, the developer, can follow step by step to understand how it arrived at its answer. Actually probably just linear regression. AI sounds better.
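In fairness to the maligned linear regression: its computation really can be followed step by step. A minimal sketch of one-variable least squares, with data invented purely for illustration:

```python
# One-feature ordinary least squares, worked by hand: y ≈ slope * x + intercept.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]  # roughly y = 2x; values invented for illustration

x_mean = sum(xs) / len(xs)
y_mean = sum(ys) / len(ys)

# Every step is inspectable: covariance over variance gives the slope.
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) \
        / sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean

# An "interpretable" prediction: the model's entire reasoning fits in one line.
prediction = slope * 5.0 + intercept
```

You can read the fitted coefficients directly and explain the prediction to anyone, developer or otherwise.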
long-term risks (ph) – Bad things that could have catastrophic effects in the far-off future. Probably will never happen, but more important to study and avoid than the immediate harms of existing AI systems.
partners (n) – Other elite groups who share your worldview and can work with you to maintain the status quo. See stakeholders.
privacy trade-off (ph) – The noble sacrifice of individual control over personal information for group benefits like AI-driven health-care advancements, which also happen to be highly profitable.
progress (n) – Scientific and technological advancement. An inherent good.
real world (ph) – The opposite of the simulated world. A dynamic physical environment filled with unexpected surprises that AI models are trained to survive. Not to be confused with humans and society.
regulation (n) – What you call for to shift the responsibility for mitigating harmful AI onto policymakers. Not to be confused with policies that would hinder your growth.
responsible AI (n) – A moniker for any work at your company that could be construed by the public as a sincere effort to mitigate the harms of your AI systems.
robustness (n) – The ability of an AI model to function consistently and accurately under nefarious attempts to feed it corrupted data.
safety (n) – The challenge of building AI systems that don’t go rogue from the designer’s intentions. Not to be confused with building AI systems that don’t fail. See alignment.
scale (n) – The de facto end state that any good AI system should strive to achieve.
security (n) – The act of protecting valuable or sensitive data and AI models from being breached by bad actors. See adversary.
stakeholders (n) – Shareholders, regulators, users. The people in power you want to keep happy.
transparency (n) – Revealing your data and code. Bad for proprietary and sensitive information. Thus really hard; quite frankly, even impossible. Not to be confused with clear communication about how your system actually works.
trustworthy (adj) – An assessment of an AI system that can be manufactured with enough coordinated publicity.
universal basic income (ph) – The idea that paying everyone a fixed salary will solve the massive economic upheaval caused when automation leads to widespread job loss. Popularized by 2020 presidential candidate Andrew Yang. See wealth redistribution.
validation (n) – The process of testing an AI model on data other than the data it was trained on, to check that it is still accurate.
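This entry is played straight: scoring a model only on held-out data is how accuracy claims are supposed to be checked. A minimal sketch with an invented dataset and a deliberately trivial one-parameter threshold “model”:

```python
# Hold out part of the data and score the model only on the unseen portion.
# Dataset and "model" are invented for illustration: label is 1 when x > 5.
data = [(1, 0), (2, 0), (3, 0), (6, 1), (7, 1), (8, 1), (4, 0), (9, 1)]

train, test = data[:6], data[6:]  # never evaluate on training examples

# "Train": place the threshold midway between the classes seen in training.
threshold = (max(x for x, y in train if y == 0)
             + min(x for x, y in train if y == 1)) / 2

def predict(x):
    return 1 if x > threshold else 0

# Validation accuracy: fraction of held-out examples predicted correctly.
accuracy = sum(predict(x) == y for x, y in test) / len(test)
```

Training accuracy would tell you nothing new; only the held-out score says whether the model generalizes.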
value (n) – An intangible benefit rendered to your users that makes you a lot of money.
values (n) – You have them. Remind people.
wealth redistribution (ph) – A useful idea to dangle around when people scrutinize you for using way too many resources and making way too much money. How would wealth redistribution work? Universal basic income, of course. Also not something you could figure out yourself. Would require regulation. See regulation.
withhold publication (ph) – The benevolent act of choosing not to open-source your code because it could fall into the hands of a bad actor. Better to limit access to partners who can afford it.