IEEE quoting Aristotle

If someone had told me a few years ago that the IEEE (Institute of Electrical and Electronics Engineers) would start an urgent discussion of the purpose of life and cite Aristotle, I might not have believed them. Now, thanks to rapid developments in AI, it has become a necessity:

We need to make sure that these technologies are aligned to humans in terms of our moral values and ethical principles. AI/AS have to behave in a way that is beneficial to people beyond reaching functional goals and addressing technical problems. This will allow for an elevated level of trust between humans and our technology that is needed for a fruitful pervasive use of AI/AS in our daily lives.

Eudaimonia, as elucidated by Aristotle, is a practice that defines human wellbeing as the highest virtue for a society. Translated roughly as “flourishing,” the benefits of eudaimonia begin by conscious contemplation, where ethical considerations help us define how we wish to live.

By aligning the creation of AI/AS with the values of its users and society we can prioritize the increase of human wellbeing as our metric for progress in the algorithmic age.

FROM:
Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems

One thought on “IEEE quoting Aristotle”

  1. Quick list of all the open questions in this fantastic whitepaper:

    • How can we ensure that AI/AS do not infringe human rights? (Framing the Principle of Human Rights)

    • How can we assure that AI/AS are accountable? (Framing the Principle of Responsibility)

    • How can we ensure that AI/AS are transparent? (Framing the Principle of Transparency)

    • How can we extend the benefits and minimize the risks of AI/AS technology being misused? (Framing the Principle of Education and Awareness)

    • Values to be embedded in AIS are not universal, but rather largely specific to user communities and tasks.

    • Moral overload: AIS are usually subject to a multiplicity of norms and values that may conflict with each other.

    • AIS can have built-in data or algorithmic biases that disadvantage members of certain groups.

    • Once the relevant sets of norms (of AIS’s specific role in a specific community) have been identified, it is not clear how such norms should be built into a computational architecture.

    • Norms implemented in AIS must be compatible with the norms in the relevant community.

    • Achieving a correct level of trust between humans and AIS.

    • Third-party evaluation of AIS’s value alignment.

    • Ethics is not part of degree programs.

    • We need models for interdisciplinary and intercultural education to account for the distinct issues of AI/AS.

    • The need to differentiate culturally distinctive values embedded in AI design.

    • Lack of value-based ethical culture and practices for industry.

    • Lack of values-aware leadership.

    • Lack of empowerment to raise ethical concerns.

    • Lack of ownership or responsibility from the tech community.

    • Need to include stakeholders to provide the best context for AI/AS.

    • Poor documentation hinders ethical design.

    • Inconsistent or lacking oversight for algorithms.

    • Lack of an independent review organization.

    • Use of black-box components.

    • As AI systems become more capable (as measured by the ability to optimize more complex objective functions with greater autonomy across a wider variety of domains), unanticipated or unintended behavior becomes increasingly dangerous.

    • Retrofitting safety into future, more generally capable, AI systems may be difficult.

    • Researchers and developers will confront a progressively more complex set of ethical and technical safety issues in the development and deployment of increasingly autonomous and capable AI systems.

    • Future AI systems may have the capacity to impact the world on the scale of the agricultural or industrial revolutions.

    • How can an individual define and organize his/her personal data in the algorithmic era?

    • What is the definition and scope of personally identifiable information?

    • What is the definition of control regarding personal data?

    • How can we redefine data access to honor the individual?

    • How can we redefine consent regarding personal data so it honors the individual?

    • Data that appears trivial to share can be used to make inferences that an individual would not wish to share.

    • How can data handlers make the consequences (positive and negative) of accessing and collecting data explicit to an individual, so that truly informed consent can be given?

    • Could a person have a personalized AI or algorithmic guardian?

    • Professional organizations’ codes of conduct often have a significant loophole: they do not hold members’ works (the artifacts and agents they create) to the same values and standards as the members themselves, to the extent that those works can be held to them.

    • Confusions about definitions regarding important concepts in artificial intelligence, autonomous systems, and autonomous weapons systems (AWS) stymie more substantive discussions about crucial issues.

    • AWS are by default amenable to covert and non-attributable use.

    • There are multiple ways in which accountability for AWS’s actions can be compromised.

    • AWS might not be predictable (depending on their design and operational use). Learning systems compound the problem of predictable use.

    • Legitimizing AWS development sets precedents that are geopolitically dangerous in the medium-term.

    • Exclusion of human oversight from the battlespace can too easily lead to inadvertent violation of human rights and inadvertent escalation of tensions.

    • The variety of direct and indirect customers of AWS will lead to a complex and troubling landscape of proliferation and abuse.

    • By default, the type of automation in AWS encourages rapid escalation of conflicts.

    • There are no standards for design assurance verification of AWS.

    • Understanding the ethical boundaries of work on AWS and semi-autonomous weapons systems can be confusing.

    • Misinterpretation of AI/AS in media is confusing to the public.

    • Automation is typically viewed only within market contexts.

    • The complexities of employment are being neglected in discussions of robotics/AI.

    • Technological change is happening too fast for existing methods of (re)training the workforce.

    • Any AI policy may slow innovation.

    • AI and autonomous technologies are not equally available worldwide.

    • There is a lack of access and understanding regarding personal information.

    • Increased active representation of developing nations in The IEEE Global Initiative is needed.

    • The advent of AI and autonomous systems can exacerbate the economic and power-structure differences between and within developed and developing nations.

    • How can we improve accountability and verifiability in autonomous and intelligent systems?

    • How can we ensure that AI is transparent and respects individual rights? For example, international, national, and local governments are using AI in ways that impinge on the rights of their citizens, who should be able to trust the government, and thus the AI, to protect those rights.

    • How can AI systems be designed to guarantee legal accountability for harms caused by these systems?

    • How can autonomous and intelligent systems be designed and deployed in a manner that respects the integrity of personal data?
