I am building the materials for my new "AI Strategy" workshop, based on the book "Engines of Engagement: a curious book about Generative AI". In the first section, we start with a view of the future, which is an opportunity to explore our "areas of disruption". There is a clear premise in this work that generative AI will change things – change everything – but not necessarily in the same ways, or at the same time. Some things will outlast others, and different organizations will adapt, or be disrupted, in different ways.

Indeed, that last sentence points to one "area of disruption": some organizations will act defensively, while others will see disruption as a competitive advantage. Think of AI as a sword or a shield. Do we hide behind the "known", driving efficiency and synergy, or do we also strike out, away both from our competitors and from our historical constraints?
Clearly, generative AI will change the "craft" of almost everything, from the way we create slides to videos, strategic plans to legal contracts. This may make us more uniform, or more creative. Or both, depending on context. It may commoditize excellence in some areas, while creating new spaces for differentiation.

Most likely, an initial or original competence may be "disrupted" by the emergence of new, more advanced, complementary, or alternative "skills". Or the old skill may simply be eradicated.
Things like typography, grammar, project planning, summarization, even critical thinking and strategic planning. In some of these, we can see that AI will surpass or replace us; in others, it will augment us; and in some, it will liberate us, or fail us.
Currently, we are largely guessing, or generalizing from details, which is not a clear strategy for success. And we know (as we ultimately explored in the book) that "human exceptionalism" will cost us dearly – the obstinate belief that you cannot "replace" the human touch in a given field. Indeed, the phrase "AI can never do it as well as a human" is one of the most dangerous cases of hope sheltering behind dogmatic belief that you are likely to hear.
Right now, we just don't know. And even those things that we know to be "true" are unlikely to be "true" forever.
This work is positioned "for people leading in the grey": it is not a treasure hunt. It is not simply a case of seeking "better" or "harder", but rather of building the purposes, structures, systems, and communities that can support our learning, our experimentation, our "sense-making", and our ability to change. And to change again, and again.

Sharing our views of the future can help us recognize the lenses we bring, which sometimes express the limits of our knowledge and our stories. In this sense, our "view" of the future can constrain our very thinking as we move into it.
We need the ability to innovate within our existing systems – to optimize and drive both efficiency and creativity – but also to innovate the systems themselves, which can be a process of fracturing and reconceptualizing what we already are. We are refitting our organizations in the most dramatic sense since their original industrial conception, and we will have to reinvent them continuously after that. Not a "fit" to a target state, but rather a continuous ability to reshape, reform, and reinvent. And hence a new strategic imperative, and a new capability.
