My Humble Thoughts - Collaboration is What Singapore Needs
The primary and most pressing challenge for Responsible AI is in ensuring AI Safety and trustworthiness, but the complexity of the problem has grown far beyond our expectations. The current international landscape for RAI is defined by the urgent technical challenge of AI assurance in existing frontier “black-box” models, which are managed through complex multi-angled analysis techniques and sophisticated control methods towards safe and trustworthy outputs.
Background: From Global to Local
This global reality establishes the local context, where Singapore’s approach to technical AI safety is clearly laid out in the recent Singapore Consensus [1], which proposes a comprehensive, multi-pronged model based on three integrated domains: Risk Assessment, Development, and Control. Also, while Singapore is demonstrating a greater focus on AI safety as a regional research hub, critics have pointed out existing efforts are focused on local languages and dialects rather than the development of frontier capabilities; as a result, existing safety efforts are in an early stage and present safeguards techniques are characterized as relatively fundamental [2].
While Singapore has a solid foundation with current initiatives and grassroots efforts, the path forward is two-fold - on top of us having to move beyond fundamental testing systems towards tackling frontier AI capabilities, we must also double down on existing collaborations as part of our status as a global AI research hub across the Whole-of-Government and with international stakeholders.
It is also worth acknowledging the strong efforts in the local scene. For one, while various technological arms in the public sector primarily functioned as drivers of digital products development and production for domestic use cases, they have recently been enhancing efforts in AI safety, as seen by local various efforts and initiatives educating AI practitioners as well as the conceptualisation and development of new public use cases, complementing existing Whole-of-Government (WOG) efforts. On a wider scale, these efforts are also supported by enhanced coordination within and beyond WOG, such as through collaborative platforms like Lorong AI, and industry-academia collaborations like the TRANS Grant research initiative.
Pillar 1: Fostering Stronger Collaborative Efforts
Given this landscape, the vital question is, ‘So what?’; I believe we can tap on the existing AI landscape plus initiatives to translate existing technical challenges into potential opportunities to lead as a global research hub in the coming 2-3 years. First, we can double down on our core strengths as a recognized regional research hub with deep human capital and able to engage with both local and international AI communities. We can definitely foster stronger collaborative efforts both within and beyond WOG and with academia, building on the aforementioned initiatives to bring researchers working on fundamental, complex problems into the public safety domain.
Pillar 2: Pushing to Address Frontier Capabilities
Second, while existing efforts have concentrated on local, SEA linguistic-based solutions and foundational safeguards, I believe we can extend our goal and aim for frontier capabilities, elevating our local status to the international level. This is especially beneficial as existing local directions are intertwined with fundamental, unresolved issues of Responsible AI. For example, while existing efforts correctly identify finetuning as a means toward alignment of AI systems, this approach faces deep technical challenges as current methods do not necessarily guarantee inner alignment [3] and may instead mask deception [4]. This failure could then pose as a public safety risk across foundational AI models. Additionally, while existing efforts highlight model understanding as a primary component as part of the existing approach towards safe deployment of AI, but such concepts like mechanistic interpretability still remain an open research question actively being investigated [5].
While the development of these AI assurance techniques pose as a complex and difficult challenge, I believe these open technical questions present Singapore with a strategic dual opportunity. By actively engaging with these issues, we can simultaneously ensure the safety of our public AI applications and solidify our role as a key contributor to global AI safety research and standards.
Enjoy Reading This Article?
Here are some more articles you might like to read next: