International Association for Safe and Ethical AI
The International Association for Safe and Ethical AI (IASEAI) is a non-profit organization whose stated mission is to address the risks and opportunities associated with advances in artificial intelligence (AI). IASEAI was founded to promote safety and ethics in AI development and deployment, and it focuses on shaping policy, supporting research, and encouraging a global community of experts and stakeholders.[1]

Activities

IASEAI is involved in policy development, research and awards, education, and community-building. The organization develops policy analyses related to standards, regulation, international cooperation, and research funding, which it publishes as position papers. It also supports research into both the technical and sociotechnical aspects of AI safety and ethics.

Inaugural conference

The inaugural IASEAI conference, IASEAI '25, was held on February 6–7, 2025 in Paris, shortly before the Paris AI Action Summit. The event brought together experts from academia, civil society, industry, and government to discuss developments in AI safety and ethics. The program featured over 40 talks, keynote addresses, and specialized tracks on global coordination, safety engineering, disinformation, interpretability, and AI alignment.[2][3][4] Notable participants included:
The conference also included presentations from early-career researchers and practitioners, such as Aida Brankovic of the Australian e-Health Research Centre (AEHRC), who presented guidelines developed to mitigate ethical risks in clinical decision support AI systems.[5] Other participants included Georgios Chalkiadakis of the Technical University of Crete.[6] Topics addressed included reinforcement learning from human feedback (RLHF), AI governance, regulatory frameworks, agentic AI, misinformation, and transparency. Geoffrey Hinton's keynote, What Is Understanding?, explored how AI systems process meaning. Gillian Hadfield called for anticipatory legal capacity, and Evi Micha introduced a framework for aligning AI using "linear social choice."[3] The conference was noted for its emphasis on AI safety, in contrast to the broader Paris AI Action Summit, which some observers said focused more on economic and geopolitical aspects of AI. Attendee Paul Salmon, a professor of human factors, criticized the broader summit for sidelining safety issues in favor of commercial narratives and outlined five "comforting myths" that obscure public understanding of AI risks.[7] At the conclusion of the event, IASEAI issued a ten-point Call to Action for lawmakers, researchers, and civil society, recommending global cooperation, binding safety standards, and expanded public research funding.[8]

Board and committee
The steering committee includes:
References