
There’s a piece of management advice that circulates widely, feels intuitive, and is quietly becoming one of the more dangerous ideas in enterprise technology leadership.
It goes something like this: once you move into management, your job is to set direction, develop people, and remove obstacles. The technical details (the actual behavior of the systems, the friction in the workflows, the edge cases in the tools) are your team’s domain now, not yours. Staying too close to that work signals mistrust, creates bottlenecks, and distracts you from the “real” job of leadership.
In a stable environment, there’s logic to this. When the underlying systems change slowly, when you can safely assume last year’s mental model still approximates this year’s reality, delegating technical depth is a reasonable strategy for scaling your own attention. You synthesize. You trust your team’s judgment. You operate one level of abstraction above the details.
But that isn’t the environment we’re operating in. And the leaders who haven’t updated this assumption are accumulating a kind of strategic debt that will eventually come due.
The false binary and where it breaks down
Many organizations still frame career development in technology as a fork in the road: the individual contributor track, or the management track. The implication embedded in that framing is that technical depth and leadership responsibility exist in tension, that gaining one means ceding the other.
To be fair, the best organizations have evolved past the rigid version of this binary. Principal engineer roles, staff-plus IC paths, player-coach models, and technical program management structures all exist precisely because that clean separation failed under real conditions. Most senior practitioners in mature tech organizations understand that effective leadership at the VP level requires ongoing technical credibility, not just people skills and OKR fluency.
But the underlying instinct, that management means moving away from technical judgment, that “stepping back” is what professional maturity looks like, persists widely. It persists in how we coach high-potential managers. It persists in the unspoken signals organizations send about what “executive presence” requires. And it persists most visibly in the moments when a technically excellent leader gets promoted and is quietly advised to stop doing the thing that made them excellent in the first place.
That pattern was always imperfect. In an AI-driven environment, it has become actively counterproductive.
What makes AI different from previous technology shifts
Every major technology transition produces some version of this debate. Leaders who came up through the transition to cloud had to decide how much infrastructure depth to maintain. Leaders navigating mobile had to decide whether to stay close to the UX implications or delegate them entirely to their teams. In most of those cases, a leader could afford a learning lag of a year or two. The systems matured, the patterns stabilized, and synthesized understanding from secondhand input was eventually “good enough” for strategic decision-making.
AI is different in at least three ways that matter for how leaders should calibrate their proximity to the work.
First, the pace of capability change is genuinely fast relative to enterprise decision cycles. The gap between what a model could do when you last evaluated it and what it can do now (or what it does differently under a new configuration, a new version, or a new prompting approach) can be significant enough to invalidate prior decisions. Leaders who are making platform bets, vendor commitments, or policy calls based on six-month-old firsthand knowledge are, in many cases, operating on outdated assumptions without realizing it.
Second, the failure modes are subtle in ways that previous technology transitions were not. Infrastructure failures are usually visible. Downtime is measurable. A misconfigured AI tool, by contrast, can fail silently, producing outputs that look plausible, are acted upon, and are only understood to be wrong weeks or months later, if at all. Leaders who haven’t used the tools themselves don’t develop the instinct to spot these failure modes. They are dependent on their teams both to encounter them and to escalate them, a chain that is longer and more fragile than it appears.
Third, the decisions that AI forces are genuinely cross-domain in ways that resist clean delegation. A decision about which AI tools to standardize across an organization looks like a procurement call. It is actually a product decision, a workflow decision, a security decision, a change-management decision, and a talent signal. Untangling these dimensions requires a level of integrated judgment that is hard to develop, or to exercise, from a distance.
The leaders most accountable for AI outcomes (budget, risk, adoption) are often the least exposed to how the tools actually behave. That gap is not a leadership style choice. It is a structural liability that compounds over time.
What the cost actually looks like
The cost of technical disconnection at the leadership level rarely announces itself. It doesn’t look like a missed deadline or a failed deployment. It tends to look like a series of small, reasonable decisions that accumulate into a strategic position no one quite intended.
It looks like a platform standardization decision made on the basis of a vendor demo and a team summary, where the tool that won the evaluation performs well in controlled conditions and consistently underperforms in the edge cases that make up 40 percent of actual work. No one lied. The evaluation was reasonable. But no one in the room had spent enough time in the friction to know what questions to ask.
It looks like a risk posture that was calibrated to a model’s behavior at the time of the security review, then silently drifted as the model was updated, new capabilities were enabled, or adjacent tools were integrated in ways the original review didn’t anticipate. The risk function did its job. The gap is that no one with accountability for the outcome had the firsthand context to notice the drift.
It looks like a change-management approach designed around the assumption that the tools are intuitive (because the leader sponsoring the rollout found them intuitive) that then runs into significant resistance from practitioners working in contexts the leader has never tried to use the tools in.
These aren’t failures of intelligence or effort. They are failures of proximity. And in a fast-moving environment, proximity gaps compound faster than they used to.
What “staying technical” actually means at the VP level
Staying hands-on as a senior leader is not an argument for micromanagement. It is not a claim that leaders should be doing individual contributor work, competing with their teams for execution credit, or inserting themselves into decisions that belong at lower levels. The goal is not technical heroics. It is technical proximity: being close enough to the actual behavior of the systems you are accountable for to exercise sound judgment about them.
In practice, for me, that proximity shows up as three distinct behaviors.
The first is using the tools personally, in real workflows, regularly.
Not in a demo environment. Not in a structured evaluation exercise. In actual work: the kind of work where you have a real deadline, a real output you care about, and real consequences if the tool fails. The failure modes that matter in enterprise AI rarely surface in controlled demonstrations. They surface when you’re trying to produce something real and the tool does something unexpected: hallucinates a confident but wrong answer, degrades in quality as the context window fills, produces output that is technically correct but structured in a way that creates downstream problems. These are the things that practitioners on your team are navigating every day. If you have never encountered them yourself, you cannot fully appreciate the gap between “the tool works” and “the tool works well enough to build on.”
The second is being present, not peripheral, when key tradeoffs are made.
There is a particular kind of meeting that looks, on the agenda, like a vendor review or a cost-optimization discussion, and turns out to be a decision that will shape your organization’s AI posture for the next two years. Leaders who are in the room for the summary but not the deliberation often don’t realize which meeting was which until later. Technical proximity means being engaged enough in the details that you can recognize when a conversation that sounds like a tactical call is actually a strategic one, and being present enough to shape it when it matters.
A concrete example from our own work: we have been building an AI capability program segmented by persona, with different tools, access levels, and cost structures for different kinds of work. On paper, it reads as a vendor rationalization exercise: consolidate overlapping tools, manage spend, create tiered access. In practice, the decisions required knowing where specific tools fail silently for specific job functions, which personas have workflows that break if you standardize on a lower-capability model, and where the friction cost of a given tool outweighs its efficiency benefit. None of that was visible from a summary. The judgment calls required firsthand context about tool behavior across different use cases, the kind you only develop by actually using them.
The third is earning the right to push back substantively.
When your team flags a risk, identifies a constraint, or pushes back on a direction, your ability to engage with that input, rather than simply accepting it or overriding it, depends on having enough firsthand context to distinguish a genuine technical constraint from a comfort zone, a real risk from an overstated one, a well-reasoned concern from a framing that reflects a team’s prior assumptions. Leaders without technical proximity are forced into one of two bad options: rubber-stamp their team’s judgment on everything technical (which isn’t leadership), or override it without sufficient basis (which isn’t either). The third option, engaging as a peer, asking the right questions, contributing an informed perspective, is only available if you’ve done the work to earn it.
The mindset beneath the behaviors
The behavioral commitment to technical proximity requires a particular kind of orientation, one that Satya Nadella articulated when he took over Microsoft as the shift from “know-it-all” to “learn-it-all.” The premise is that curiosity and the willingness to be a beginner, repeatedly and publicly, are more durable leadership advantages than accumulated expertise. Expertise has a shelf life. The disposition to keep developing it does not.
I think that framing is right, but in an AI-driven environment it cannot live at the level of philosophy or identity statement. It has to show up as behavior, specifically the habit of choosing, repeatedly, to stay close to the work even when your role would give you permission to step back from it.
The leaders I’ve seen navigate AI transformation most effectively share a particular trait: they are willing to be visibly uncertain in front of their teams. They try tools in public, ask questions that reveal gaps in their understanding, and treat their own learning as part of organizational capability-building rather than something to develop privately before presenting a confident face. That posture, call it learning out loud, creates a different kind of organizational permission. It signals that not knowing something yet is not a leadership failure. The failure is in stopping the effort to find out.
The harder question: what are organizations signaling?
The people who tend to engage with this kind of content are, by self-selection, already curious and already oriented toward staying close to the work. The harder problem is not convincing them. It is the organizational context they operate in.
What signals do we send, as organizations, about what good leadership looks like in technical domains? If the implicit message, in how we coach, promote, and develop leaders, is still that management means stepping back from the details, we are producing a generation of leaders who are technically accountable for outcomes they don’t understand well enough to steer. We are rewarding the appearance of strategic thinking while quietly penalizing the hands-on engagement that makes that thinking grounded.
The corrective is partly cultural and partly structural. It looks like senior leaders modeling technical engagement publicly, not treating it as something they do privately. It looks like development programs that build technical literacy as a leadership competency, not as something that phases out after a certain level. It looks like evaluation criteria that include quality of technical judgment alongside financial performance and people metrics.
And it looks like being willing to say, explicitly, that the “manager track means stepping back” advice, while well-intentioned, is producing leaders who are less equipped for the moment we are actually in.
What this moment actually demands
The leaders who will perform best in the next three to five years aren’t the ones who managed their way to altitude and surveyed the landscape from a safe distance. They are the ones who stayed close enough to the work to know when the map no longer matched the territory, and who had enough firsthand context to do something about it when it didn’t.
That requires a different definition of leadership maturity than the one many of us were handed. Not the ability to operate from abstraction. The ability to move fluidly between abstraction and ground truth, to set direction without losing touch with the reality that direction has to survive.
In the age of AI, leadership is not a step away from the work. It is a sustained, deliberate commitment to understanding the work well enough to lead it, even as it keeps changing.