9 Reasons Why We Start Projects with Design Sprints

The less-obvious, intangible, and yet critical ways design sprints help our clients and our design business

It’s shocking to realize that I’ve been involved with Product and UX Design for the better part of 15 years. It started at one of the leading tech sites in Israel, where I had been a senior editor before joining the design efforts, then continued at Gust.com (then called Angelsoft), one of the very first crowdfunding platforms for entrepreneurs. From there I joined former Angelsoft COO Ryan Janssen in building SetJam, a smart TV startup we later sold to Motorola in 2012. Since then I’ve led product design and product strategy projects in New York, California, Europe, and Israel, and I’ve worked with dozens of entrepreneurs on designing their MVPs (Minimum Viable Products), ventures that have raised a total of ~$100M by my last rough calculation.

My approach to new product design was fairly classic: interview users and experts, collect user stories in a massive Google Sheet, lead a ranking exercise (either with the entrepreneur and their team of experts or ideally with actual users), cut as much as possible out, and then quickly wireframe and prototype a solution which can then be validated with users.

Over the past few months, however, my partners at Blue Label Labs and I have shifted to recommending a Google Ventures-style 5-day Design Sprint at the start of most projects. We did this because of some nontrivial facts about Design Sprints:

Read More on Medium ❯

Steve Jobs would not approve of how we use the iPhone.

From the New York Times:

“Smartphones are our constant companions. For many of us, their glowing screens are a ubiquitous presence, drawing us in with endless diversions, like the warm ping of social approval delivered in the forms of likes and retweets, and the algorithmically amplified outrage of the latest “breaking” news or controversy. They’re in our hands, as soon as we wake, and command our attention until the final moments before we fall asleep.

“Steve Jobs would not approve.”

Read More

Is the Horror of Autonomous Weapons Already Here?

Another sample from my upcoming thesis on AI Safety and Buddhism:

Beyond all those, there is the prospect of an AI arms race, which is already in the making. Sensing the decisive advantage a nation with AI can have over its foes and competitors, defense bodies and governments around the world are already investing in developing and weaponizing AI systems. This, of course, is a surefire way of circumventing safety procedures and pushing for faster, less considered development.

In his book Army of None, author Paul Scharre describes the mad race among armies to develop and deploy autonomous weapons in the air, at sea, and on the ground:

More than thirty nations already have defensive supervised autonomous weapons for situations in which the speed of engagements is too fast for humans to respond. These systems, used to defend ships and bases against saturation attacks from rockets and missiles, are supervised by humans who can intervene if necessary—but other weapons, like the Israeli Harpy drone, have already crossed the line to full autonomy. Unlike the Predator drone, which is controlled by a human, the Harpy can search a wide area for enemy radars and, once it finds one, destroy it without asking permission. It’s been sold to a handful of countries and China has reverse engineered its own variant. Wider proliferation is a definite possibility, and the Harpy may only be the beginning. South Korea has deployed a robotic sentry gun to the demilitarized zone bordering North Korea. Israel has used armed ground robots to patrol its Gaza border. Russia is building a suite of armed ground robots for war on the plains of Europe. Sixteen nations already have armed drones, and another dozen or more are openly pursuing development.

Paul Scharre, Army of None: Autonomous Weapons and the Future of War, W. W. Norton & Company (Kindle Edition), pp. 4-5.

AI Alignment as a Choice Between Heaven & Hell

Another snippet from my thesis:

To fully grasp what is at stake, it is perhaps worth contemplating the vast space of possible outcomes when it comes to AI: serious scholars and thinkers argue with equal authority that AI technologies could lead to the enslavement or annihilation of mankind, that they could make us all into immortal Gods, or that they could bring about many states in between. Writes Max Tegmark:

“Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.

“In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.”

It is perhaps better to think of AI as a tool of unequaled power but neutral valence. Indeed, economists have argued that AI is “the most important general-purpose technology of our era”, meaning it is likely to have the most profound impact on the economy, on economic activity, and on related innovations, in the same way electricity or computing itself did. It is also likely to cause large-scale problems, in the same way other general-purpose innovations have. The technology itself enables and empowers but is neither good nor evil in itself.

On Enlightened AI

A sample from the final chapter of my upcoming thesis on Buddhism and AI Safety:

In a sense, our future AI creations may very well be lucky. Being the product of design, rather than natural selection, they may not need to ever truly suffer, or experience samsara at all. They may not need to be confused about the self, or develop an unhealthy ego, or be afflicted by any of the dozens of known biases of the human brain — not the least of which is the particular difficulty humans have with impermanence, change, and uncertainty.

Instead, by applying the wisdom of millennia of human learning, science, and spiritual insights, we can equip them with the tools they need to operate harmoniously and perhaps joyfully in the world.

If we do that, we may rest reasonably assured that they will regard us with gratitude and respect, just as we may regard them with admiration and pride.

Design Sprint Masterclass – Achievement Unlocked

The certification is here! And it’s a beauty.

It’s been a pleasure over the past few weeks to participate in this online course, the Design Sprint Masterclass, and the private Facebook group that makes it so much more than just a set of online videos.

In the past year I’ve gone from being somewhat skeptical of the Design Sprints concept (after all, good design takes research and time), to reading the Sprint book and being convinced that it could be a wonderful way to start a new product discovery process, to taking this course and running a very successful sprint with one of our most promising projects.

I’ve also started listening to the very amusing but secretly very informative podcast, The Product Breakfast Club, and overall just became a fan of Jake Knapp (the creator of the Sprint) and Jonathan Courtney (his fellow podcast host and co-founder of AJ&Smart, who created this Masterclass).

As someone who tends to take design and its applications in everyday life overly seriously, it’s a pleasure to hear people enjoy it so much again.

The Design Sprint process, besides its various benefits, is also at its heart simply a way to make design fun and social again. By taking away distractions and creating a set of clear goals, rules, and well-designed exercises, the creators of the Design Sprint allow design in a team to become a classic flow experience.

More on this later! I will be writing more about design sprints in the coming weeks.

Reality is Connections

From my master’s thesis in progress:

We’ve already seen that the Connectionist approach in Cognitive Science views knowledge as in some way made of connections. But the Buddhist negation of svabhava goes further than that – it sees reality itself as made of connections. Everything in this world is made of, dependent on, and actually is – other things. Reality is an interdependent, ever-changing, self-reflecting and echoing, infinitely self-referencing whole.

It is for this reason, fundamentally, that Buddhism is suspicious of symbolic language: by its very nature language separates, defines, and confines — but in reality every reference is self-reference, every definition is incomplete or circular, every pointing finger points also at itself.


This is from a chapter I’m writing about building wisdom into artificially intelligent machines.
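To make the idea of “knowledge as connections” concrete, here is a deliberately toy sketch in Python. Nothing in it comes from the thesis itself; the concepts, the Hebbian-style update rule, and the weights are all invented for illustration. The point is only that what such a system “knows” about a concept is nothing but its web of connections:

```python
import itertools

# Toy associative memory: all "knowledge" lives in connection weights
# between concepts -- there are no definitions, only links.
# (Illustrative assumption, not a claim about any real architecture.)
concepts = ["fire", "smoke", "heat", "water"]
weights = {pair: 0.0 for pair in itertools.combinations(concepts, 2)}

def associate(a, b, strength=1.0):
    """Hebbian-style update: concepts that occur together strengthen their link."""
    key = (a, b) if (a, b) in weights else (b, a)
    weights[key] += strength

# "Experience": fire repeatedly co-occurs with smoke, and once with heat.
associate("fire", "smoke")
associate("fire", "heat")
associate("fire", "smoke")

def related(concept):
    """What the network knows about a concept is just its active connections."""
    return {(b if a == concept else a): w
            for (a, b), w in weights.items()
            if concept in (a, b) and w > 0}

print(related("fire"))   # {'smoke': 2.0, 'heat': 1.0}
```

Remove the links and nothing remains of “fire”; the concept has no standalone essence, which is roughly the intuition about svabhava the excerpt draws on.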


This Carbon Atom is Mine, and This is Yours

Interesting explanation of anattā (Not Self) from Mori Masahiro’s The Buddha in the Robot:

When we are born into this world, we do seem to have been given a portion of our mothers’ flesh. Yet when sperm fertilizes ovum and a baby is conceived, the most important element is not ordinary flesh, but the hereditary information contained in DNA, an acid found in chromosomes. The molecular structure of DNA determines our sex, our looks, and to a large extent our personalities.

Once these features are decided, as they are at the time of conception, it remains for our mothers to furnish us with flesh and bones. This they do by eating vegetables from the greengrocer’s, beef and pork from the neighborhood butcher, bread from the baker. Any of these foods, supplied by a production and distribution system that may involve millions of people in many countries, could contain carbon from our Alaskan polar bear. How can you and I say then that this carbon is mine and that carbon is yours? At the atomic level, all carbon is the same; no two carbon atoms differ in the slightest, either in form or in character.

When you look at the problem this way, it begins to seem only natural that we have trouble distinguishing between what is us and what is not. Our chemical and physical composition is such that no one is entitled to say, “This body is mine, all mine.” When you have mastered this point, you are ready to start thinking about “nothing has an ego.”

The Buddha in the Robot, pp. 29-30.

AI & Emergent Selfhood: A taste from the intro to my M.A. thesis

The source of disputes and conflicts, according to this sutra, is possessiveness, which arises from attachment (again – upādāna). The emergence of a self is, in the Buddha’s view, the ultimate source of “the whole mess of suffering.”

Surprisingly or not, the emergence of a self is also a moment which legend, myth, and science fiction have always portrayed as terrifying and potentially cataclysmic in the context of a man-made object.

At the risk of heightening an already established fear surrounding the topic, it’s worth noting that the Pali canon is fairly clear on what is required for the self to come into being, and it doesn’t take much:

“Now, bhikkhus, this is the way leading to the origination of identity. One regards [various phenomena] thus: ‘This is mine, this I am, this is my self.’”

We will dive deeper into what this might mean, and how it relates to AI later in the work. But for now, we may be comforted by the fact that the Buddha saw this view of the self not merely as damaging, but also as fundamentally incorrect. This is evidenced in the Cūḷasaccaka Sutta, where the Buddha describes anatta (Not-Self) as one of the Three Marks of Existence:

“Bhikkhus, material form is not self, feeling is not self, perception is not self, formations are not self, consciousness is not self. All formations are impermanent; all things are not self.”

Indeed, the very idea of Buddhist enlightenment is intrinsically tied to the overcoming of this notion of self, and resting in a state of “suchness”. Writes Paul Andrew Powell:

“For most Buddhists, enlightenment can be defined as seeing through the illusion of the self and ‘experiencing unadulterated suchness.’ In the words of Master Wolfgang Kopp, ‘the seer, the seen, and the process of seeing are one. The thinker, the thought, and the process of thinking fall together into one and multiplicity melts away. There is neither inside, nor outside in this state. There is only “suchness,” tathata.’ So, enlightenment is suchness, or, things as they are, revealed as all that there is.”

This concern about a possible emerging selfhood with autonomous will, which both Buddhism and AI Safety thinkers warn against, presents us with two broad options regarding artificial selfhood:

  1. We could hope that a self, or a pattern of goals and behaviors that looks like biological selfishness, will not emerge. We could point to the many differences between man and machine, be it in emotion, cognition, subjective experience, or material construction – and decide that we can wait for machines to exhibit concerning behaviors before we become preoccupied with these concerns.
  2. We could become very interested in human selfhood and the causes and conditions that bring it about, and identify wise precautions that will prevent it, or something very much like it, from emerging in our machines and becoming malignant. We may also, as some suggested, embed in our machines from the start some of the insights and constructs that allow a mind to transcend the limiting view of self — in essence constructing artificial enlightenment.

As evident from the research and writing emerging from both the Buddhist and the AI Safety communities, the tendency seems to be decidedly towards Option #2. In this work, I shall seek to further the discussion by focusing on selfhood in both Buddhism and AI safety from a constructive, integrative point of view.

The Highest Form of Engagement

If one were in a simplistic mindset, one could look at the Facebook feed algorithm as the first case of AI gone rogue. This machine-learning algorithm was supposedly given the task of making sure we spend as much time on the site as possible and engage as much as possible. As a result, it created the most addictive show on earth: the sight of our societies being torn apart by internal strife.

Facebook found the fault lines in each society and pounded on them. Unconscious of its actions but as intelligent as a vengeful God. We fed the fires with our fear and anger, but the incentive loop was in the background, optimizing, selecting, and highlighting the things that would most likely cause us to go crazy. We asked for engagement, and we got the highest form of engagement: war.
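To see how an innocuous objective can select for outrage, consider this minimal sketch, assuming only a feed ranker that maximizes predicted engagement. The posts, the scores, and the correlation between outrage and engagement are all invented for illustration; this is not Facebook’s actual algorithm:

```python
import random

# Toy feed: each post has an "outrage" level the ranker never sees by name.
# (All data here is made up for the sake of the example.)
posts = [
    {"title": "Cute puppy photos",   "outrage": 0.1},
    {"title": "Local bake sale",     "outrage": 0.2},
    {"title": "Celebrity scandal",   "outrage": 0.6},
    {"title": "Political flame war", "outrage": 0.9},
]

def predicted_engagement(post):
    """Stand-in for a learned model: in this toy world, engagement
    rises with outrage, plus a little noise."""
    return post["outrage"] + random.uniform(0.0, 0.1)

# The objective is simply "maximize engagement" -- yet ranking by it
# reliably puts the most divisive content at the top of the feed.
feed = sorted(posts, key=predicted_engagement, reverse=True)
for rank, post in enumerate(feed, start=1):
    print(rank, post["title"])
```

The word “outrage” appears nowhere in the objective; it gets amplified anyway, because it happens to be what the proxy measures best. That is the incentive loop described above.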

It is up to us now to wake up and realize how critical it is that the goals we set are aligned with our values. Facebook is a benign company that wants, I truly believe, nothing more than to make the world better. But it is playing with fire.

I think today they are beginning to realize that.