AI for Management: it’s not about prompting
Are you still playing with ChatGPT, or have you managed to get your hands around the risks and opportunities of AI for your company? If you have not yet done the latter, the following thoughts may help you take a step in that direction.
By David Rosenthal
First it was fascination. Fascination with ChatGPT and other AI tools. New headlines appeared weekly about AI's capabilities: generating text, images, and videos, passing the bar exam, and folding proteins far better than humans can.
Then, fear. Fear of AI ruling mankind, of bias and hallucinations, of missing out. People realized AI’s potential for good and bad. They recognized potential legal issues, lawmakers sensed an urge to regulate, and companies spent heavily to keep up. Today, it’s fighting. Fighting with providers to fulfill contractual requirements and understand their red tape. Fighting to control costs and find profitable business cases. Fighting to stay current and meet inflated user expectations. Microsoft’s Copilot exemplifies this: a wonderful concept, but many find it poorly implemented and far too expensive. And the contracts are full of pitfalls. Meanwhile, new issues arise, like risks from dependency on technology under the Trump administration’s control.
Technology won't solve these issues (certainly not AI), nor will law or specialists; specialists can only propose and execute. Top management must lead: set priorities and decide on directions, investments, and acceptable risks. This holds true for AI as well.
Why managers are having a hard time
This is largely undisputed. The challenge is that top management is still grappling with the topic. Three themes recur:
Technology and organizational ability are overestimated.
AI is promised as a silver bullet, and vendors oversell AI-powered products and services. We ourselves saw many legal tech solutions promising contract review and drafting like a pro, but failing even at simple tasks unless used by experts. We developed simple tools with plain-vanilla AI models, trained our people, and avoided inflated promises. This, in turn, has worked out well and saves us a lot of money. Still, it's a shame we had to create these tools on our own; it shows the AI market's immaturity. Managers should understand that not everything in the news is mature, affordable, or easy to integrate. And some of it requires know-how that many companies still lack (and find difficult to buy; separating the wheat from the chaff is not easy these days).
Legal issues and risks are misunderstood.
Companies struggle to understand AI risks and overestimate legal challenges. They (wrongly) believe that the EU AI Act is a general AI regulation, that data protection law prohibits feeding LLMs with personal data, or that copyrighted training materials create a risk even when using commercial LLMs. They sign AI infrastructure contracts (e.g., with Microsoft) without realizing the restrictive use-case limitations. Many lack plans to assess operational, compliance, and reputational risks, while AI governance offerings push oversized frameworks and GRC tools that eat up resources before any real work gets done. Meanwhile, issues like vendor dependency are barely addressed.
Board members and company directors feel lost amid AI's fast, complex developments.
Where and how much to invest? Where to set limits? What can AI do strategically, tactically, operationally? What are the real risks? Of course, it is not top management's job to solve these issues in day-to-day operations; it should and will rely on data scientists, AI experts, engineers, IT and security specialists, and lawyers. But top management needs to set the direction, the priorities, and the risk appetite.
How to get your hands around the topic
So, what is holding it back? Often, it's a lack of understanding and the difficulty of getting independent, pragmatic views: not from doomsayers, AI evangelists, salespeople, or problem-focused individuals. As a partner at a large law firm who also guides his colleagues on these matters, I've asked these questions myself. My approach:
Demystify the technology.
Every manager can understand how an LLM works in a limited amount of time (a bit of math is needed). This knowledge changes perspectives and makes it easier to assess opportunities and risks. Don't just take a ChatGPT beginners' course; learn how the magic works. We must demystify AI at the decision-maker level, too, in order to deal with it appropriately.
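To give a flavor of what "learning how the magic works" can mean: at its core, a language model does nothing more mysterious than estimate which token is likely to follow a given context and then pick one accordingly. The toy Python sketch below makes the point with simple word-pair counts. It is, of course, a drastic simplification (real LLMs learn these statistics with neural networks trained on vast amounts of text), but the principle of next-token prediction is the same.

    import random
    from collections import Counter, defaultdict

    # Toy training corpus; a real LLM is trained on billions of documents.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count which word follows which (a "bigram" model), a crude stand-in
    # for the far richer statistics a neural network learns.
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def next_word(word):
        counts = following[word]
        if not counts:  # dead end: this word was never seen mid-text
            return random.choice(corpus)
        # Sample the next word in proportion to how often it was observed.
        return random.choices(list(counts), weights=list(counts.values()))[0]

    # "Generate" text by repeatedly predicting the next word.
    word, text = "the", ["the"]
    for _ in range(6):
        word = next_word(word)
        text.append(word)
    print(" ".join(text))

A manager who has seen this once understands why such a system can produce fluent text and still "hallucinate": it predicts what is plausible, not what is true.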
Require a realistic business case.
Play with technology, but AI deployments should have a business case. We rejected many legal tech solutions; some simply did not work well enough. Others were great products but, honestly, uneconomical (other firms that did make these investments later confirmed this to us). We take smaller steps. It's not just about the investment; it's about user adoption and changing habits. A formidable AI solution is of no use if people do not adopt it.
Don't strive for perfection, take risks (you understand).
Some legal issues remain unresolved when using AI. Accept that. AI makes mistakes. This won't disappear, so accept this, too, and deal with it. We addressed data security and the compliant processing of client data with AI. Identify the game changers in your industry (we did so for ours). Today, we encourage everyone at the firm to use AI. We have policies, training, and other measures, but in the end we keep relying on our colleagues to act responsibly. Legal and other risks remain, as with everything. Why should it be different for AI?
So, what is the bottom line of all this? It's about understanding what's going on. This is not just a matter for the experts; it's also one for top management. And it does not require an expert degree. Some time investment, genuine interest, and the right teacher or other sources of knowledge will do.
David Rosenthal
is a partner at VISCHER AG, specializing in data and technology law with a focus on data law, AI, and IT and cloud projects, advising a broad range of clients, from startups to global organizations and the government. Unusually for a lawyer, he is also a software developer.