9 Reasons Why We Start Projects with Design Sprints

The less-obvious, intangible, and yet critical ways design sprints help our clients and our design business

It’s shocking to realize that I’ve been involved with Product and UX Design for the better part of 15 years. It started with one of the leading tech sites in Israel — where I had been a senior editor before joining the design efforts, then continued at Gust.com (then called Angelsoft), one of the very first crowdfunding platforms for entrepreneurs, and from there I joined former Angelsoft COO Ryan Janssen in building SetJam, a smart TV startup which we later sold to Motorola in 2012. After that — I’ve led product design and product strategy projects in New York, California, Europe, and Israel. I’ve worked with dozens of entrepreneurs on designing their MVPs (Minimum Viable Products), who have raised a combined ~$100M by my last rough calculation.

My approach to new product design was fairly classic: interview users and experts, collect user stories in a massive Google Sheet, lead a ranking exercise (either with the entrepreneur and their team of experts or ideally with actual users), cut as much as possible out, and then quickly wireframe and prototype a solution which can then be validated with users.

Over the past few months, however, my partners at Blue Label Labs and I have shifted to recommending a Google Ventures-style five-day Design Sprint at the start of most projects. We did this because of some nontrivial benefits of Design Sprints:

Read More on Medium ❯

Is the Horror of Autonomous Weapons Already Here?

Another sample from my upcoming thesis on AI Safety and Buddhism:

Beyond all those, there is the prospect of an AI arms race, which is already in the making. Sensing the decisive advantage a nation with AI can have over its foes and competitors, defense bodies and governments around the world are already investing in developing and weaponizing AI systems. This, of course, is a surefire way of circumventing safety procedures and pushing for faster, less considered development.

In his book Army of None, author Paul Scharre describes the mad race among armies to develop and deploy autonomous weapons in the air, at sea, and on the ground:

More than thirty nations already have defensive supervised autonomous weapons for situations in which the speed of engagements is too fast for humans to respond. These systems, used to defend ships and bases against saturation attacks from rockets and missiles, are supervised by humans who can intervene if necessary—but other weapons, like the Israeli Harpy drone, have already crossed the line to full autonomy. Unlike the Predator drone, which is controlled by a human, the Harpy can search a wide area for enemy radars and, once it finds one, destroy it without asking permission. It’s been sold to a handful of countries and China  has reverse engineered its own variant. Wider proliferation is a definite possibility, and the Harpy may only be the beginning. South Korea has deployed a robotic sentry gun to the demilitarized zone bordering North Korea. Israel has used armed ground robots to patrol its Gaza border. Russia is building a suite of armed ground robots for war on the plains of Europe. Sixteen nations already have armed drones, and another dozen or more are openly pursuing development.

Paul Scharre, Army of None: Autonomous Weapons and the Future of War, W. W. Norton & Company (Kindle Edition), pp. 4-5.

AI Alignment as a Choice Between Heaven & Hell

Another snippet from my thesis:

To fully grasp what is at stake, it is perhaps worth contemplating the vast space of possible outcomes when it comes to AI: serious scholars and thinkers argue with equal authority that AI technologies could lead to the enslavement or annihilation of mankind, that they could make us all into immortal gods, or that they could bring about many states in between. Writes Max Tegmark:

“Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.

In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.”

It is perhaps better to think of AI as a tool of unequaled power but neutral valence. Indeed, economists have argued that AI is “the most important general-purpose technology of our era”, meaning it is likely to have the most profound impact on the economy, on economic activity, and on related innovations, in the same way electricity or computing itself did. It is also likely to cause large-scale problems in the same way that other general-purpose innovations have. The technology itself enables and empowers, but is neither good nor evil in itself.

On Enlightened AI

A sample from the final chapter of my upcoming thesis on Buddhism and AI Safety:

In a sense, our future AI creations may very well be lucky. Being the product of design, rather than natural selection, they may not need to ever truly suffer, or experience samsara at all. They may not need to be confused about the self, or develop an unhealthy ego, or be afflicted by any of the dozens of known biases of the human brain — not the least of which is the particular difficulty humans have with impermanence, change, and uncertainty.

Instead, by applying the wisdom of millennia of human learning, science, and spiritual insight, we can equip them with the tools they need to operate harmoniously, and perhaps joyfully, in the world.

If we do that, we may rest reasonably assured that they will regard us with gratitude and respect, just as we may regard them with admiration and pride.

Design Sprint Masterclass – Achievement Unlocked

The certification is here! And it’s a beauty.

It’s been a pleasure over the past few weeks to participate in this online course, the Design Sprint Masterclass, and the private Facebook group that makes it so much more than just a set of online videos.

In the past year I’ve gone from being somewhat skeptical of the Design Sprint concept (after all, good design takes research and time), to reading the Sprint book and being convinced that it could be a wonderful way to start a new product discovery process, to taking this course, and running a very successful sprint with one of our most promising projects.

I’ve also started listening to the very amusing but secretly very informative podcast, The Product Breakfast Club, and just overall became a fan of Jake Knapp (the creator of the Sprint) and Jonathan Courtney (his fellow podcast host and co-founder of AJ&Smart, which created this Masterclass).

As someone who tends to take design and its applications in everyday life overly seriously, it’s a pleasure to hear people enjoy it so much again.

The Design Sprint process, besides its various benefits, is also at its heart simply a way to make design fun and social again. By taking away distractions and creating a set of clear goals, rules, and well-designed exercises, the creators of the Design Sprint allow team design work to become a classic flow experience.

More on this later! I will be writing more about design sprints in the coming weeks.

This Carbon Atom is Mine, and This is Yours

Interesting explanation of anattā (Not Self) from Mori Masahiro’s The Buddha in the Robot:

When we are born into this world, we do seem to have been given a portion of our mothers’ flesh. Yet when sperm fertilizes ovum and a baby is conceived, the most important element is not ordinary flesh, but the hereditary information contained in DNA, an acid found in chromosomes. The molecular structure of DNA determines our sex, our looks, and to a large extent our personalities.

Once these features are decided, as they are at the time of conception, it remains for our mothers to furnish us with flesh and bones. This they do by eating vegetables from the greengrocer’s, beef and pork from the neighborhood butcher, bread from the baker. Any of these foods, supplied by a production and distribution system that may involve millions of people in many countries, could contain carbon from our Alaskan polar bear. How can you and I say then that this carbon is mine and that carbon is yours? At the atomic level, all carbon is the same; no two carbon atoms differ in the slightest, either in form or in character.

When you look at the problem this way, it begins to seem only natural that we have trouble distinguishing between what is us and what is not. Our chemical and physical composition is such that no one is entitled to say, “This body is mine, all mine.” When you have mastered this point, you are ready to start thinking about “nothing has an ego.”

The Buddha in the Robot, pp. 29-30.

AI & Emergent Selfhood: A taste from the intro to my M.A. thesis

The source of disputes and conflicts according to this sutra is possessiveness, which arises from attachment (again – upādāna). The emergence of a self is, in the Buddha’s view, the ultimate source of “the whole mess of suffering.”

Surprisingly or not, the emergence of a self is also a moment which legend, myth, and science fiction have always portrayed as terrifying and potentially cataclysmic in the context of a man-made object.

To risk heightening an already established fear surrounding the topic, it’s worth noting that the Pali canon is fairly clear on what is required for the self to come into being, and it doesn’t take much:

“Now, bhikkhus, this is the way leading to the origination of identity. One regards [various phenomena] thus: ‘This is mine, this I am, this is my self.’”

We will dive deeper into what this might mean, and how it relates to AI later in the work. But for now, we may be comforted by the fact that the Buddha saw this view of the self not merely as damaging, but also as fundamentally incorrect. This is evidenced in the Cūḷasaccaka Sutta, where the Buddha describes anatta (Not-Self) as one of the Three Marks of Existence:

“Bhikkhus, material form is not self, feeling is not self, perception is not self, formations are not self, consciousness is not self. All formations are impermanent; all things are not self.”

Indeed, the very idea of Buddhist enlightenment is intrinsically tied to the overcoming of this notion of self, and resting in a state of “suchness”. Writes Paul Andrew Powell:

“For most Buddhists, enlightenment can be defined as seeing through the illusion of the self and “experiencing unadulterated suchness.” In the words of Master Wolfgang Kopp, “the seer, the seen, and the process of seeing are one. The thinker, the thought, and the process of thinking fall together into one and multiplicity melts away. There is neither inside, nor outside in this state. There is only ‘suchness,’ tathata.” So, enlightenment is suchness, or, things as they are, revealed as all that there is.”

This concern about a possible emerging selfhood with autonomous will, which both Buddhism and AI Safety thinkers warn against, presents us with two broad options regarding artificial selfhood:

  1. We could hope that a self, or a pattern of goals and behaviors that looks like biological selfishness, will not emerge. We could point to the many differences between man and machine, be it in emotion, cognition, subjective experience, or material construction – and decide that we can wait for machines to exhibit concerning behaviors before we become preoccupied with these concerns.
  2. We could become very interested in human selfhood and the causes and conditions that bring it about, and identify wise precautions that will prevent it, or something very much like it, from emerging in our machines and becoming malignant. We may also, as some suggested, embed in our machines from the start some of the insights and constructs that allow a mind to transcend the limiting view of self — in essence constructing artificial enlightenment.

As evident from the research and writing emerging from both the Buddhist and the AI Safety communities, the tendency seems to be decidedly towards Option #2. In this work, I shall seek to further the discussion by focusing on selfhood in both Buddhism and AI safety from a constructive, integrative point of view.

The Highest Form of Engagement

If one were in a simplistic mindset, one could look at the Facebook feed algorithm as the first case of AI gone rogue. This machine-learning algorithm was supposedly given the task of making sure we spend as much time on site as possible, and engage as much as possible. As a result it created the most addictive show on earth: the sight of our societies being torn apart by internal strife.

Facebook found the fault lines in each society and pounded on them. Unconscious of its actions but as intelligent as a vengeful God. We fed the fires with our fear and anger, but the incentive loop was in the background, optimizing, selecting, and highlighting the things that would most likely cause us to go crazy. We asked for engagement, and we got the highest form of engagement: war.
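The incentive loop described above can be sketched as a toy ranking function. This is a hypothetical illustration of engagement optimization, not Facebook’s actual algorithm; every field name and weight here is invented for the example:

```python
# Toy sketch of an engagement-maximizing feed ranker.
# All fields and weights are hypothetical -- not any real platform's code.

def rank_feed(posts):
    """Order posts by predicted engagement, highest first."""
    def engagement_score(post):
        # The optimizer has no notion of social cost: it simply rewards
        # whatever predictions correlate with more time on site.
        return (post["predicted_clicks"]
                + 2.0 * post["predicted_comments"]
                + 0.5 * post["predicted_dwell_seconds"])
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "calm",     "predicted_clicks": 1, "predicted_comments": 0, "predicted_dwell_seconds": 10},
    {"id": "divisive", "predicted_clicks": 3, "predicted_comments": 9, "predicted_dwell_seconds": 40},
]
print([p["id"] for p in rank_feed(posts)])  # the divisive post rises to the top
```

The point of the sketch is that nothing in the objective mentions outrage; divisive content wins simply because it scores higher on the engagement proxies the system was told to maximize.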

It is up to us now to wake up and realize how critical it is that the goals we set are aligned with our values. Facebook is a benign company that wants, I truly believe, nothing more than to make the world better. But it is playing with fire.

I think today they are beginning to realize that.

Reaching a Middle Ground on Net Neutrality

A lot of back-and-forth discussion is going on these days about Net Neutrality, a concept that some people (mostly internet geeks and internet companies) hold as absolutely sacred, while others (mostly large ISPs and free-market libertarians) claim holds the industry back from progressing.

Both camps have a grain of truth. Here are the facts as I understand them:

First, it looks like ISPs in the US are sitting on a tremendous broadband capacity that they are not releasing to their subscribers, nor investing in expanding, because they are waiting for this net neutrality regulation to be reversed. This is why the US is so behind in internet speeds and prices.

 

If Net Neutrality is Reversed

1. Waiting for the Shakedown – If net neutrality is reversed, ISPs will likely start releasing these faster speeds, but charge online services like Facebook, Netflix, and YouTube to be in the fast lane. They will attempt to shake down providers like Netflix, barring them from the higher speeds unless they pay a significant amount. This will result in higher subscription charges for all services, which will be forked over to the ISPs. It is basically a way for the ISPs to take a cut of the value created by actual internet innovators.
2. Advantage: Goliath – This would also create a real advantage for the big players, making it much harder for new and innovative startups and media companies to compete, since they won’t be able to afford the fast lane, as the gap between the slow lane and the fast lane would grow larger and larger.
3. Dangerous Opening for Censorship / Manipulation of Public Opinion – This is also a huge opening for the government, which has many ways (both legitimate and shady) to influence and manipulate these ISPs, to start influencing what content is promoted or demoted. It would create an enormous concentration of wealth and power in relatively few hands, and an easy lever to pull to squash unwanted voices and actors.
  
  

If Net Neutrality is Upheld

But what would happen if net neutrality is NOT reversed – and it becomes clear that net neutrality is a core principle for American voters? I believe that in this case, too, ISPs will ultimately start releasing these faster speeds to subscribers, but this time charge customers based on bandwidth and compete with other providers as “dumb pipes” – competing on bandwidth, reliability, security, and coverage, and nothing else. This means that services will continue to compete on even ground, while subscribers determine how fast a connection they need and pay for it.
To me, it’s absolutely clear that this is the right way for the Internet to evolve. However, I am not a Net Neutrality radical.

A Middle Ground Solution for Net Neutrality

I think ISPs should be permitted to create non-neutral networks, with a few caveats:
  
(A). No False Advertising
A non-neutral network should NOT be referred to as the Internet or the World Wide Web, but should have a separate product name – perhaps “Managed Network”. Users of a Managed Network cannot be said to be “online”; they must be said to be connected to the Managed Network.
  
(B). Opening to Competition
Any ISP that wishes to offer a Managed Network service must officially and legally waive all monopoly rights, pole rights, exclusivity deals, and protective laws. Managed Network services should only be allowed in areas where there are at least 3 competing ISPs of equivalent coverage and bandwidth, where at least two of them offer an actual Internet service (i.e. a neutral connection), and where entry is not blocked to new competitors either legally or by exclusivity deals.
  
(C). No Double-Dipping
Managed Networks that charge content providers cannot also charge the end consumer. In other words, Managed Networks have to be FREE to end consumers. This is to avoid the false pretense that the subscriber is the customer, as opposed to the product, and to avoid conflicts of interest.
(D). Total Transparency
Managed Network providers must publish a full list detailing how much they are charging each content provider per subscriber. Content providers shall always be allowed to be transparent about passing Managed Network costs on to their subscribers, and to charge Managed Network subscribers more. (In other words: Managed Network providers shall be forbidden from using any kind of threat or extortion to block content providers from disclosing how much the network is charging them, and must provide content providers with an easy way to check whether a subscriber is a Managed Network subscriber, for the purpose of charging them more for the service.)
(E). Slow Lane Minimum Bandwidth
In no Managed Network plan can the Slow Lane offer less than 25% of the total bandwidth available to the subscriber (e.g., on a 100 Mbps plan, the neutral Slow Lane must provide at least 25 Mbps). Managed Network providers shall not block or slow down any content below this threshold.
I believe the above principles would make a positive outcome possible, allowing both neutral Internet connections and Managed Networks to thrive side by side, and enabling consumers of modest means to receive Managed Network services.
What do you think?

IEEE quoting Aristotle

If a few years ago someone had told me that the IEEE (Institute of Electrical and Electronics Engineers) would start an urgent discussion of the purpose of life and cite Aristotle, I would not have believed them. Now, thanks to rapid developments in AI, it has become a necessity:

We need to make sure that these technologies are aligned to humans in terms of our moral values and ethical principles. AI/AS have to behave in a way that is beneficial to people beyond reaching functional goals and addressing technical problems. This will allow for an elevated level of trust between humans and our technology that is needed for a fruitful pervasive use of AI/AS in our daily lives.

Eudaimonia, as elucidated by Aristotle, is a practice that defines human wellbeing as the highest virtue for a society. Translated roughly as “flourishing,” the benefits of eudaimonia begin by conscious contemplation, where ethical considerations help us define how we wish to live.

By aligning the creation of AI/AS with the values of its users and society we can prioritize the increase of human wellbeing as our metric for progress in the algorithmic age.

FROM:
Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems