Guiding the governance of AI for a long and flourishing future
Concordia AI is a Beijing-based social enterprise focused on AI safety and governance.
About
AI is likely the most transformative technology that has ever been invented. Controlling and steering increasingly advanced AI systems is a critical challenge for our time.
Concordia AI aims to ensure that AI is developed and deployed in a way that is safe and aligned with global interests. We provide expert advice on AI safety and governance, support AI safety communities in China, and promote international cooperation on AI safety and governance.
Our Approach
Advising on AI safety and governance
We aim to raise awareness of potential risks from AI and promote best practices to mitigate those risks. We participate in consultations on Chinese government policy, consult for leading AI labs, and collaborate on research reports with Chinese academia.
Supporting technical communities
We aim to create a thriving ecosystem that will drive progress towards safe AI. We convene seminars, run fellowships to train aspiring safety practitioners, and publish educational resources on AI safety for Chinese AI researchers in industry and academia.
Promoting international cooperation
We aim to align strategies for the safe development and deployment of AI globally. We facilitate dialogues between Chinese and international experts and advise multilateral organizations to further technical understanding of AI risks and safety solutions, develop policy ideas, and build trust across communities.
Impact Highlights
BAAI AI Safety and Alignment Forum
Concordia AI was the official co-host and moderator of the AI Safety and Alignment Forum at the 2023 Beijing Academy of Artificial Intelligence (BAAI) Conference, which featured speakers such as Sam Altman, Geoffrey Hinton, and Andrew Yao. The Forum was the first event at a major Chinese AI conference to focus on AI safety and alignment; it was attended by over 500 people in person and viewed over 200,000 times online. See media coverage in, e.g., the Wall Street Journal, WIRED, Forbes, AI Era, and Tencent Technology.
China-Western Exchanges on AI Safety
Global Perspective on AI Governance report
Concordia AI Safety and Alignment Fellowship
Concordia AI is currently running China’s first AI Safety & Alignment Fellowship for ~20 machine learning graduate students from China’s top universities. We aim to inspire participants to contribute to AI safety and alignment research by discussing the potential risks and benefits of superintelligence and introducing them to cutting-edge research in the field. The Fellowship curriculum is adapted from the AGI Safety Fundamentals course designed by OpenAI’s Richard Ngo and features a series of online seminars plus a research project component.
Educational resources on AI safety
Concordia AI has worked with Chinese publishers to translate and promote English-language books on AI safety such as Life 3.0, Human Compatible, and, most recently, The Alignment Problem for Chinese audiences. We also run a WeChat account (安远AI) where we have published articles on AI risks and safety, including an Alignment Overview Series, an explainer of the Future of Life Institute’s Open Letter to Pause Giant AI Experiments, and a database of AI alignment failures.
Submission to the UN Global Digital Compact
In March 2023, Concordia AI submitted a paper on regulating AI risks to the UN Global Digital Compact. The UN Global Digital Compact is an initiative housed under the Secretary-General’s vision, “Our Common Agenda”, and aims at “outlin[ing] shared principles for an open, free and secure digital future for all”. In our paper, we recommended several principles for designing and implementing regulations on AI risks, as well as actions that the UN and other multilateral organizations could take to support the enactment of those principles.
Our Team
Brian Tse
Founder and CEO
Brian is the Founder and CEO of Concordia AI. He is also a Policy Affiliate at the Centre for the Governance of AI. Previously, Brian was Senior Advisor to the Partnership on AI. He co-edited the book Global Perspective on AI Governance published by Tongji University Press. He also served on the program committee of AI safety workshops at AAAI, IJCAI, and ICFEM. Brian has been invited to speak at Stanford, Oxford, Tsinghua, and Peking University on global risk and foresight on advanced AI.
Liang Fang
Senior Governance Lead
Liang is a Senior Governance Lead at Concordia AI, where he leads Concordia’s advisory work on AI safety and governance. He was previously a senior technical consultant at Baidu, where he actively promoted the research, communication and implementation of AI ethics and governance. He has participated in the research and formulation of several Chinese government AI and S&T policies.
Kwan Yee Ng
Senior Program Manager
Kwan Yee is a Senior Program Manager at Concordia AI, where she leads projects to promote international cooperation on AI safety and governance. She previously worked with Professor Wang Jisi at the PKU Institute for International and Strategic Studies on numerous research projects; prior to that, she was a research fellow at Oxford University’s Future of Humanity Institute. Kwan Yee received a master’s degree from Peking University as a Yenching Scholar.
Jason Zhou
Senior Research Manager
Jason is a Senior Research Manager at Concordia AI, where he works on promoting international cooperation on AI safety and governance. He previously worked as a Business Advisory Services Manager at the US-China Business Council’s Beijing Office, researching China’s ICT industry and its data security, cybersecurity, and privacy policies. Jason received a master’s degree from Tsinghua University as a Schwarzman Scholar, where he wrote a thesis on China-US relations.
Yawen Duan
Technical Program Manager
Yawen is a Technical Program Manager at Concordia AI, where he works on projects to support technical AI safety communities. He is a Future of Life Institute AI Existential Safety PhD Fellow and an incoming ML PhD student at the University of Cambridge focusing on LLM safety and alignment. He has prior experience in AI safety and alignment research at UC Berkeley and in David Krueger’s group at the University of Cambridge. His work has been published at ML/CS venues such as CVPR, ECCV, ICML, ACM FAccT, and the NeurIPS ML Safety Workshop. Yawen received an MPhil in ML from Cambridge and a BSc from the University of Hong Kong.
Muzhe (Yessi) Li
Operations Manager
Muzhe is an Operations Manager at Concordia AI, where she manages the company’s finances, human resources and organizational infrastructure. She was formerly a Sequoia Fellow and worked at Genki Forest as a Strategy Analyst and Product Manager. Prior to that, Muzhe was a Technical Product Manager on Didi Mobility’s International Product Team.
Yunxin Fan
Operations Manager
Yunxin is an Operations Manager at Concordia AI, where she oversees the company’s branding strategy and media engagement. Previously, she worked at Dentsu Aegis as a Senior Account Manager, focusing on international media strategy for leading tech companies and venture capital firms. She also has prior experience working for Caixin, China’s leading finance and economics journal, and as a consultant for the Economist Intelligence Unit.
Ziya Huang
Operations Manager
Ziya is an Operations Manager at Concordia AI, where she works on affiliate management as well as legal and compliance issues. Previously, she founded a program that empowered youth to tackle the world’s most pressing problems. Leveraging her experience in the social impact space, she is also on the board of several mission-driven startups.
Affiliates
Concordia’s work is further supported by a network of 18 affiliates, including many PhD students from UC Berkeley, Mila, ETH Zurich, and other top CS/ML programs; researchers from the Centre for the Governance of AI and the Alignment Research Center; and graduates of other top universities such as Tsinghua, Harvard, and MIT.
We are a social enterprise
We generate income through consulting and advisory projects for investment companies and tech companies in mainland China, Hong Kong, and Singapore. As an independent institution, we are not affiliated with or funded by any government or political group.