When Ends Illuminate Means (or, Saving Humanity from the Terminator)

I recently experienced the whiplash of traveling from the serenity of rural Wisconsin to the frenetic pace of Midtown Manhattan. I found myself standing in Grand Central Terminal, imagining De Niro and Grodin in Midnight Run and marveling that I was one of roughly 750,000 people who would pass through that day. It cost me ten bucks to travel from there to JFK airport, which is basically a miracle.

My trip to NYC was for a conference on higher education, thanks to my dear friend, Novita, whose technology group hosted the event. Conference attendees were mostly tech leaders at colleges and universities, alongside vendors from the tech industry, and you might not be surprised that the major topic of conversation was not the trains at Grand Central. No, Artificial Intelligence (“AI”) was the topic du jour.

My primary relationship with AI had been jokes about how, in many typical fonts, a capital A followed by a capital I (for Artificial Intelligence) looks exactly like a capital A followed by a lowercase L (as in my name, Al), leading to all sorts of fun headlines for me personally, like “How Al Is Changing the Music Business” and “Stop Talking About Al.” When it comes to new technology, I am intentionally a late adopter. I recognize that the world changes and that I must adapt to remain engaged, but I am critical of our collective tendency to jump at the new and shiny without thinking, so I choose to arrive fashionably late to the party.

But the conference conversations were timely for me, since I am reluctantly boarding the AI train. I heard multiple people quote a leader of the tech giant NVIDIA, who reportedly said, “No, AI is not going to take your job. Someone who knows AI is going to take your job.” That will catch your attention (although it is still funny if you read it inserting my name instead). And I was struck by a side conversation in which a couple of high-tech leaders said that the very developers of AI are shocked by the speed of its development. That is actually frightening.

My perspective is that, as with most things, AI is neutral on its face, with both good and bad potential. And yet I also identify with the camp that meets this particular technology with great apprehension. I should explain the latter.

I feel like a broken record for referring to Jacques Ellul and his prophetic 1954 book, The Technological Society, as often as I do, but Ellul’s warning about “ever-increasing means” directed toward “carelessly-examined ends” seems to be on steroids when it comes to AI. Not only are the means far more powerful and increasing far more rapidly than ever before, but the conversation about ends is nonexistent, at least to my knowledge. As I understand it, the developers aren’t even sure where the technology is headed, much less is our society engaging in thoughtful conversation about where its current trajectory will take us.

If it helps, Ellul isn’t the only author I lean on; I also refer to Jim Collins’s classic book, Good to Great, over and over and over. Good to Great examined companies that made the leap referred to in its title and shared lessons on how that occurred, and I recently made a connection between the book and the AI Revolution. In Good to Great, Collins coined the term “Hedgehog Concept,” which he described as identifying the one thing in the world your business can do the very best, and then described the “Flywheel Effect” as staying laser-focused on that one thing until the momentum builds to a breakthrough moment of greatness. Important stuff, but I had almost forgotten that Collins included a section on technology, too, and I had almost forgotten it because he made the crucial observation that technology should never be the point; instead, technology should at most be a tool that accelerates your laser-focused work on the one thing that matters most to your business.

This is ridiculously important right now, I believe. While I am fully convinced that society as a whole will not engage in a conversation about desired outcomes, maybe you and I, in our respective spheres of influence, can fight the powerful headwinds against us to determine, with enough specificity for clarity, what we want our lives to look like someday (i.e., the ends), and then cling to that destination with desperation. If AI/tech can be useful to accelerate our journey toward those worthy goals, then by all “means” (ha!), use it. But if not, do not get sucked into its powerful and seductive vortex.

I have long heard the saying that the ends do not justify the means, and that is true in the sense that immoral or unethical behavior is still wrong even if it produces something good. But what I am trying to communicate today is that tenaciously establishing the ends first will help illuminate the means and allow you to banish all unhelpful distractions to the shadows. Put another way, establishing noble ends first illuminates the means that are worthy tools for achieving the noble cause.

Okay, that doesn’t have the ring of a future cliche to it, but I believe it reduces the likelihood that a cyborg devours our souls for lunch someday.