For VentureBeat / June 2020
This is the unedited, original version
It feels senseless to talk about the latest tech gadget or tech startup when we consider how the world has been rightfully ignited by instant exposure to recent tragic acts of violence against Black humanity. But when offered the opportunity to speak to a tech audience, I find it hard to turn down. Why? Because the tech world represents a relatively small community of people who know how to speak machine. And because we are digitally privileged, we embody the potential to automate more harm than good, or to automate more good than harm.
Right now is an especially important time to consider what tech privilege can and should be doing, and there's plenty of important advice being shared online. My takeaways have been to give to anti-racism causes and to do the work of an ally regularly instead of episodically. That means staying awake to fight racism 24/7, which, unfortunately, is easier to do when a tragedy has just occurred.
Honestly, though, the only thing that can really run 24/7 is a computational machine. And when you throw in a Facebook-style engagement metric like DAU/MAU (Daily Active Users divided by Monthly Active Users), people who speak machine can measure just how addictive a product has become at capturing your attention. In a research paper published in 2014, Facebook revealed that it had run experiments on users to see whether their emotional states could be shifted by showing them more positive or more negative stories. Keep in mind that this wasn't an experiment where a human being did the work of picking up a piece of paper and shoving it into the field of view of another human being. It was instead a diligent robot, working 24/7, automagically delivering information into your direct line of sight on whatever screen you're using. There is no escape from any information a computer wishes to show you unless you turn it off.
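For readers who don't speak machine: the DAU/MAU "stickiness" ratio mentioned above is simple arithmetic. Here is a minimal sketch (the function name and numbers are hypothetical illustrations, not anyone's production code):

```python
# Minimal sketch of the DAU/MAU "stickiness" metric.
# All names and figures here are hypothetical illustrations.

def stickiness(daily_active_users: int, monthly_active_users: int) -> float:
    """DAU/MAU: the fraction of a product's monthly users who show up
    on a given day. A value near 1.0 means users return almost every
    day -- the "addictiveness" signal engagement teams optimize for."""
    if monthly_active_users == 0:
        return 0.0
    return daily_active_users / monthly_active_users

# Example: 650,000 daily actives out of 1,000,000 monthly actives
print(f"{stickiness(650_000, 1_000_000):.0%}")  # → 65%
```

A ratio like 65% would be considered extraordinarily sticky; most apps sit far lower, which is exactly why attention-maximizing robots get built.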
People who know how to speak machine are well aware that a robot coded to show you pictures of things that disgust you is likely to achieve high stickiness, whereas it's difficult to create a robot that can increase your understanding of a complex topic. As a believer in teachers, I'd like to think that the perfect e-learning companion will never exist, because both learning and teaching take a significant amount of effort. When I was younger, a student in MIT's intense undergraduate program, we called it "drinking from a firehose." I get a little disturbed these days knowing that, with advances in machine learning, robots can now learn much more quickly than we can, as if millions of firehoses of information were being fed into a machine brain. And that can happen 24/7 without the machine ever needing a bathroom break or a nap, because machines don't tire at all.
Machines learn fast. Humans learn slowly. Humans learn bad things fast. Machines learn fast from humans. Machines can spread bad human things fast. Machines can spread good information quickly, too. But that's not in the best interests of the machine's engagement scores, because bad news is tastier on social media. The robots are programmed to re-share your distastes at HIGH VOLUME, which begets more reactions, which beget even greater reactions, in pursuit of the desired jackpot: an obscene level of virality. It's like standing at the mouth of a canyon, letting out a yodel, and hearing the echo and reverb grow and grow until you need to yell I CAN'T HEAR YOU.
The unexpected moral outcome of our global casino of information has been the degree of transparency we now have into acts of racism, which can be shared instantly on video.
I like to think that good news travels by foot while bad news travels by jet plane. But we don't get onto jet planes anymore in the pandemic era. And that might be a problem, because not only does bad news travel at the full speed of the Internet, our attention is fixated on a screen-based world under shelter-in-place orders.
The random interactions inherent to our physical world have vanished for the time being, which removes the chance encounters with information that might have broadened our understanding. But for many who weren't aware of the ugliness of racism, witnessing the tragic murder of George Floyd, delivered at random by a robot, had the opposite of the usual effect of drawing us further into our screens. It moved us back into the real world, which is the only place where we can demand change from our leaders with maximum effectiveness, even in times as dangerous as COVID-19. Everything happening right now points to an unexpected "bug" in the code of all the robots that have been designed to vacuum up our attention into infinite loops of engagement. We've instead been (fortuitously) forced to become awake.
But I know that many of us will be lulled back to sleep, because the 24/7 tireless work of the robots can take us in directions that are out of our control.
So I wonder: could the people who engineer the robots that control our minds and put us to sleep instead design robots that keep us awake? Is the loss of another Black life the only signal that can jolt us, and the algorithms, awake? The answer to this question matters for the future.
What are you making, my fellow speakers of machine?