Charismatic CEOs enjoy leading and inspiring people, so they don’t like delegating critical business decisions to smart algorithms. Who wants clever code bossing them around? But that future’s already arrived. At some of the world’s most successful enterprises — Google, Netflix, Amazon, Alibaba, Facebook — autonomous algorithms, not talented managers, increasingly get the last word. Elite MBAs (Management by Algorithm) are the new normal.

Executives dedicated to data-driven excellence accept the reality that smart algorithms need greater autonomy to succeed. Empowering algorithms is now as organizationally important as empowering people. But without clear lines of authority and accountability, dual empowerment guarantees perpetual conflict between human and artificial intelligence.

Computational autonomy requires that C-suites revisit the hows and whys of delegation. CEOs need to clarify when talented humans must defer to algorithmic judgment. That’s hard. The most painful board conversations that I hear about machine learning revolve around how much power and authority super-smart software should have. Executives who wouldn’t hesitate to automate a factory now flinch at the prospect of deep-learning algorithms dictating their sales strategies and capex. The implications of success scare them more than the risk of failure.

“Does this mean that all our procurement bids will be determined by machine?” asked one incredulous CEO of a multibillion-euro business unit. Yes, that’s exactly what it meant. His group’s data science, procurement, and supply chain teams crafted algorithmic ensembles that, by all measures and simulations, would save hundreds of millions. Even better, they would respond 10 times faster to market moves than existing processes while requiring minimal human intervention. Top management would have to trust its computationally brilliant bidding software. That was the challenge. But the CEO wouldn’t — or couldn’t — pull the autonomy trigger.

“You need a Chief AI Officer,” Baidu chief scientist Andrew Ng told Fortune at January’s Consumer Electronics Show. (He explained why he thinks so in a recent HBR article.) Perhaps. But CEOs serious about confronting the opportunities and risks of autonomy should consider four proven organizational options. Each of these distinct approaches enjoys demonstrable real-world success. The bad news: Petabytes of new data and relentless algorithmic innovation ensure that “autonomy creep” will continually challenge human oversight from within.

The Autonomous/Autonomy Advisor

McKinsey, Bain, and BCG are the management models here. Autonomous algorithms are seen and treated as the best strategic advisors you’ll ever have, but ones that never go away. They constantly drive data-driven reviews and make recommendations. They take the initiative on what to analyze and brief top management on what they find. But only the human oversight committee approves what gets “autonomized” and how it is implemented.

In theory, the organizational challenges of algorithmic autonomy map perfectly to which processes or systems are being made autonomous. In reality, “handoffs” and transitions prove to be significant operational problems. The top-down approach invariably creates interpersonal and inter-process frictions. At one American retailer, an autonomous ensemble of algorithms replaced the entire merchandising department. Top management told store managers and staff to honor requests and obey directives from their new “colleagues”; the resentment and resistance were palpable. Audit software and human monitors were soon installed to assure compliance.

In this model, data scientists are interlocutors and ambassadors between the autonomy oversight committee and the targets of implementation. They frequently find the technologies less of a hassle than the people. They typically become the punching bags and shock absorbers for both sides. They’re the ones tasked with blocking efforts to game the algorithms. Their loyalty and accountability belong to top management.

The Autonomous Outsourcer

“Accenturazon” — part Accenture, part Amazon Web Services — is the managerial model here. Business process outsourcing becomes business process algorithmization. The same sensibilities and economic opportunities that make outsourcing appealing become managerial principles for computational autonomy.

That means you need crystal-clear descriptions and partitioning of both tasks to be performed and desired deliverables. Ambiguity is the enemy; crisply defined service level agreements and explicit KPI accountability are essential. Process and decision owners determine the resource allocations and whether autonomy should lead to greater innovation, optimization, or both. Predictability and reliability matter most, and autonomy is a means to that end.
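To make “crisply defined” concrete, here is a minimal sketch of how a process owner might encode an autonomous process’s deliverables, KPI floors, and escalation rules in code rather than prose. The names and thresholds (AutonomySLA, KPI, the bidding figures) are invented for illustration, not drawn from any actual contract or vendor offering.

```python
from dataclasses import dataclass, field

@dataclass
class KPI:
    """A single measurable commitment for an autonomous process."""
    name: str
    target: float                 # the level the algorithm is expected to hit
    floor: float                  # breach threshold that triggers escalation
    higher_is_better: bool = True

@dataclass
class AutonomySLA:
    """Hypothetical service-level agreement for an algorithmic 'outsourcer'."""
    process: str                      # the business process being algorithmized
    owner: str                        # the human decision/process owner
    kpis: list[KPI] = field(default_factory=list)
    escalate_to_human: bool = True    # breached KPIs return control to the owner

bidding_sla = AutonomySLA(
    process="procurement_bidding",
    owner="VP Supply Chain",
    kpis=[
        KPI("median_bid_response_seconds", target=30.0, floor=120.0,
            higher_is_better=False),
        KPI("cost_savings_vs_baseline_pct", target=8.0, floor=2.0),
    ],
)

def breached(sla: AutonomySLA, measured: dict[str, float]) -> list[str]:
    """Return the names of KPIs whose measured values breach the agreed floor."""
    bad = []
    for kpi in sla.kpis:
        value = measured.get(kpi.name)
        if value is None:
            continue
        breach = value < kpi.floor if kpi.higher_is_better else value > kpi.floor
        if breach:
            bad.append(kpi.name)
    return bad
```

The point of the exercise is less the code than the discipline: every deliverable, threshold, and escalation path is explicit, so there is no ambiguity about when the algorithm keeps control and when the owner takes it back.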

As with traditional outsourcing, flexibility, responsiveness, and interoperability invariably prove problematic. The emphasis on defined deliverables subverts initiatives that might lead to autonomy-driven new value creation or opportunity exploration. The enterprise builds up a superior portfolio of effective autonomous ensembles but little synergy between them. Smarter C-suites architect their autonomous Accenturazonic initiatives with interoperability in mind.

Data scientists in business process algorithmization scenarios are project managers. They bring technical coherence and consistency to SLAs while defining quality standards for data and algorithms alike. They support the decision and process owners responsible for autonomy-enabled outcomes.

World-Class Challenging/Challenged Autonomous Employee

Even the most beautiful of minds can come with intrinsic limitations, and in that way algorithms resemble eccentric geniuses. Can typical managers and employees effectively collaborate with undeniably brilliant but constrained autonomous entities? In this enterprise environment, smart software is seeded wherever computational autonomy can measurably supplement, or supplant, desired outcomes. The firm effectively trains its people to hire and work with the world’s best and brightest algorithms.

The software is treated as a valued and valuable colleague that, more often than not, comes up with a right answer, if not the best one. Versions of this are ongoing at companies such as Netflix and Alibaba. But I cannot speak too highly of Steven Levy’s superb Backchannel discussion of how Google has committed to becoming a “machine learning first” enterprise.

“The machine learning model is not a static piece of code — you’re constantly feeding it data,” says one Google engineer. “We are constantly updating the models and learning, adding more data, and tweaking how we’re going to make predictions. It feels like a living, breathing thing. It’s a different kind of engineering.”
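The “living, breathing” quality the engineer describes is, at bottom, a retraining loop. The fragment below is a minimal, generic sketch of that idea using scikit-learn’s incremental SGDClassifier; it is not a description of Google’s actual pipeline, and load_new_labeled_batch() is a hypothetical stand-in for whatever feed of fresh, labeled data a team maintains.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# A model that supports incremental updates rather than one-shot training.
model = SGDClassifier()
classes = np.array([0, 1])  # all labels must be declared on the first partial_fit

def load_new_labeled_batch():
    """Hypothetical placeholder: returns the latest batch of labeled examples."""
    X = np.random.rand(256, 20)        # stand-in features
    y = np.random.randint(0, 2, 256)   # stand-in labels
    return X, y

# "Constantly feeding it data": each new batch nudges the model's weights
# instead of retraining from scratch.
for _ in range(10):                    # in production this loop never ends
    X_batch, y_batch = load_new_labeled_batch()
    model.partial_fit(X_batch, y_batch, classes=classes)
    # Monitoring, evaluation, and rollback logic would sit here.
```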

Commingling person/machine autonomy necessarily blurs organizational accountability. In such fast-changing learning environments, project and program managers can’t always know whether they will get better results from retraining people or retraining algorithms. Consequently, a culture of cocreation and collaboration becomes the only way to succeed.

Data scientists here facilitate. They function as an autonomous-resources department, analogous to human resources. They do things like write chatbots and adopt Alexa-like interfaces to make collaboration and collegiality simpler and easier. They look to minimize discrimination, favoritism, and tension in person/machine relationships. C-suites depend on them to understand the massive cultural transformation that pervasive autonomy entails.

All-In Autonomy

Renaissance Technologies and other, even more secretive investment funds are the management models here. These organizations are fully committed to letting algorithmic autonomy take the enterprise to new frontiers of innovation, profitability, and risk. Their results should humble those who privilege human agency. Human leadership defers to demonstrable algorithmic power.

One quant designer at a New York hedge fund (that trades more in a week than a Fortune 250 company makes in a year) confided: “It took years for us to trust the algorithms enough to resist the temptation to override them….There are still [occasional] trades we won’t make and [not doing them] almost always costs us money.”

Firms look to leverage, amplify, and network autonomy into self-sustaining competitive advantage. They use machine learning software to better train machine learning software. Machine learning algorithms stress-test and risk-manage other machine learning algorithms.
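One concrete, if simplified, form of algorithms risk-managing algorithms is to put a second model in front of the first: an anomaly detector trained on the same historical data, whose only job is to veto the primary model’s output when the input looks nothing like what it was trained on. The sketch below illustrates that gatekeeping pattern with scikit-learn’s IsolationForest; the models and data are invented for illustration, not drawn from any fund’s actual stack.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, GradientBoostingRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 8))                 # stand-in historical features
y_train = X_train @ rng.normal(size=8) + rng.normal(scale=0.1, size=5000)

# Primary "autonomous" model that produces trading or pricing signals.
primary = GradientBoostingRegressor().fit(X_train, y_train)

# Secondary model whose only job is to risk-manage the first: it learns what
# "normal" inputs look like and vetoes predictions on outliers.
gatekeeper = IsolationForest(contamination=0.01, random_state=0).fit(X_train)

def guarded_signal(x_row: np.ndarray):
    """Return the primary model's signal only when the input looks in-distribution."""
    x = x_row.reshape(1, -1)
    if gatekeeper.predict(x)[0] == -1:               # -1 means "anomalous"
        return None                                  # abstain; escalate to a human
    return float(primary.predict(x)[0])
```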

Autonomy is both the organizational and the operational center of gravity for innovation and growth. People are hired and fired based on their abilities to push the algorithmic boundaries of successful autonomy.

Leadership in these organizations demands humility and a willingness to convert trust in numbers into acts of faith. Academic computational finance researchers and fund managers alike tell me their machines frequently make trades and investments that the humans literally and cognitively do not understand. One of the hottest research areas in deep learning is crafting meta-intelligence software that generates rationales and narratives for explaining data-driven machine decisions to humans.
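One common, admittedly simplified version of that research direction is the “surrogate” approach: fit an interpretable model to the black box’s own predictions and read the rationale off the simple model. The sketch below does this with a shallow decision tree; it illustrates the pattern only, under invented data, and is not the meta-intelligence software those researchers are building.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + X[:, 3] > 0).astype(int)              # stand-in labels

# The opaque, autonomous decision-maker.
black_box = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)

# Interpretable surrogate trained to mimic the black box's decisions, not the
# original labels: its rules approximate "why" the box decides as it does.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=1)
surrogate.fit(X, black_box.predict(X))

feature_names = [f"feature_{i}" for i in range(X.shape[1])]
print(export_text(surrogate, feature_names=feature_names))  # human-readable rationale
```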

Risk management and the imperative to make complex autonomy humanly understandable dominate data science at all-in enterprises.

Admittedly, these four managerial models deliberately anthropomorphize autonomous algorithms. That is, the software is treated not as inanimate lines of code but as an entity with some sort of measurable and accountable agency. In each model, C-suites rightly push for greater transparency and accessibility into what makes the algorithms tick. Greater oversight will lead to greater insight as algorithmic autonomy capabilities advance.

CEOs and their boards need to monitor that closely. They also need to promote use cases, simulations, and scenarios to stress-test the boundary conditions for their autonomous ensembles.
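Stress-testing boundary conditions need not be elaborate to be revealing. The sketch below runs a hypothetical autonomous pricing rule through a grid of extreme demand and cost scenarios and reports where it breaches a human-set guardrail; the pricing function, scenarios, and thresholds are invented for illustration only.

```python
import itertools

def autonomous_price(demand_index: float, unit_cost: float) -> float:
    """Hypothetical stand-in for an autonomous pricing policy."""
    return unit_cost * (1.2 + 0.5 * demand_index)

# Boundary-condition scenarios: combinations well outside normal operating ranges.
demand_scenarios = [-0.9, 0.0, 1.0, 3.0]        # demand collapse through demand spike
cost_scenarios = [0.5, 1.0, 5.0, 20.0]          # stable through supply-shock input costs

PRICE_CEILING = 30.0                             # guardrail set by the human owner

violations = []
for demand, cost in itertools.product(demand_scenarios, cost_scenarios):
    price = autonomous_price(demand, cost)
    if price > PRICE_CEILING or price < cost:    # gouging, or selling below cost
        violations.append((demand, cost, round(price, 2)))

print(f"{len(violations)} scenarios breached the guardrails:")
for demand, cost, price in violations:
    print(f"  demand={demand:+.1f}, cost={cost:.1f} -> price={price}")
```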

CEOs and executive leadership teams should be wary of mashing up or hybridizing these separate approaches. The key to making them work is to build in accountability, responsibility, and outcomes from the beginning. There must be clarity around direction, delegation, and deference.

While that maxim is based on anecdotal observation and participation, not statistical analysis, never underestimate how radical shifts in organizational power and influence can threaten self-esteem and subvert otherwise professional behavior. That’s why CEOs should worry less about bringing autonomy to heel than making it a powerful source and force for competitive advantage.

Without question, their smartest competitors will be data-driven autonomous algorithms.