Some things hold greater value for us than others: cherished not only for what they are, but for how they make us feel. It can be something as simple as a jewel, or a favourite hat (whose value is imbued with the context we surround it with), or as complex as our reputation, and our senses of identity and agency.

Understanding the Social Context of AI invites us to consider these things: because much of what we hold most valuable is also what is most open to disruption. Specifically, our higher-order thinking, and the artefacts we produce through that thinking.
For many organisations, the immediate and tactical investment in generative AI is around efficiency: just this week, I heard two organisations describe how 80% of their budget and effort sits in this space. They do "innovative" work, but often as a special project, held off to one side. The problem is that it is relatively easy to innovate at the edges, but reintegrating the learning from that innovation leads to cultural conflict.
Instead, we should consider efficiency in parallel with the more central story: the disruption of our most precious things. And we should hold that conversation in practice, not in the abstract.
In the work on strategic AI, I envisage this through two dimensions: in the legacy roles of our organisations, our value is held in the higher-order capabilities described above. We expect leaders to process information, to situate it within an existing context and a learned body of knowledge; we expect them to use formally validated methodologies, alongside individually built experiential patterns, in service of this. On that foundation, we expect them to diagnose and define actions, to socialise this and distribute effort, and to measure impact and risk. Finally, a prized feature of our leaders (and the structures they build) is to create artefacts from all of this: reports, forecasts, strategies, hypotheses, marketing, and surveys. And much more besides: the whole range of what we think and do, all of which we hold dear in our conception of our roles and our selves.
And all of which is disrupted.
GenAI does not simply do these things better: it does these kinds of things differently.
It can hold a broader context; it can dissociate emotion from decision (and from history, or from any other aspect of context). It can prototype multiple possible conceptions of a truth faster, and simultaneously, and it can hold frames within which to run and assess them. Whilst AI does not "think" like us, the mechanisms by which it generates "answers" are similar in that they create novel outputs (unlike a Google search, which largely returns the same results when repeated, AI can produce varied interpretations. In this, it is as erratic as we humans are).
As organisations have been quick to exploit, AI can produce the artefacts we hold so dear, but do so faster and to a higher quality. This matters not only in what it does, but in how: whilst humans can spend a disproportionate amount of time on final polishing and publishing, AI can produce "finished" work quickly, and iterate on it faster. The mechanisms of prototyping, testing, exploring, contesting, ideating, and rejecting can all be considered different, in both process and outcome.

An anthro-technical perspective would indicate that we favour a "human plus machine" outcome, and in general that is probably the right view: for now, we are not trying to fully automate society (we are only in the earliest stages of reckoning with our core relationship with work and thought, creativity and purpose). But this view cannot rest on efficiency alone (where we outsource the bin collection and the creation of mundane spreadsheets to bots, and keep the precious elements: precious commercially, but precious too for our identity and our agency). How will we feel when "value" is scored and shared in near real time, and when it calls into question our seniority and our reputation?
These are not entirely hypothetical perspectives: we are already seeing the disaggregation of "work" from "role", of "capability" from "jobs", and of "innovation" from "structure". We are simply further down some of these paths than others.
"Strange" is probably the right way to view the future: we are naive to imagine that an anthro-technical world will be familiar, or comfortable within our view of the existing world. And we will be naive if we do not engage in conversation and experimentation around this future before it lands.
A philosophy of the Social Context of AI is not one that ignores the technology, but rather one that situates it fully within a sociological and cognitive-psychological perspective: the social context of people (which is our most fundamental truth), and the disruption of AI.
Our most precious things are not always physical: they can be the artefacts of our pride, our belief, our identity, our skill, our craft, and our sense of belonging. Our belief in our own superiority, and our fragile intellects. And all of this sits firmly in the space of disruption.
#WorkingOutLoud on the Social Context of AI, and Strategic AI.