SiliconMind Project
AI🔄️IC
Aims to contribute to the expansion of Taiwan's talent pool in AI model development, data curation, and system implementation
The initial application will be the use of Large Language Models (LLMs) and Reinforcement Learning (RL) in chip design
Develop next-generation reasoning LLMs broadly applicable to scientific research and educational scenarios.
Develop highly efficient adversarial reinforcement learning methods to reduce resource requirements while enhancing the reasoning performance and accessibility of LLMs.
Enhance LLMs' higher-order abstract reasoning and generalization capabilities, enabling them to tackle complex scientific and educational problems.
Develop human-readable and explainable reasoning processes to improve the transparency of model reasoning and facilitate human-AI collaboration.
Construct a knowledge reasoning framework suitable for localization and cross-disciplinary applications, thereby lowering the technical barrier for professionals utilizing reasoning LLMs.
Leverage the scientific databases and domain expertise of Academia Sinica to build LLM tools that are widely applicable in science education and research.
Introduce two application cases leveraging Taiwan's local advantages for practical validation.
Key Electronic Design Automation (EDA) tasks in Taiwan's semiconductor industry, to verify the model's capability to handle specialized and highly complex tasks.
A Traditional Chinese-centric multilingual and multimodal Large Language Model (LLM), to evaluate the model's abstraction and generalization performance across language and multimodal integration tasks.
PI: Chien-Yao Wang, Academia Sinica
Co-PIs
Academia Sinica (Institute of Information Science; Research Center for Information Technology Innovation): Hong-Yuan Mark Liao, Ai-Chun Pang, Ling-Jyh Chen, Yuan-Hao Chang, Yu Tsao, Li Su, De-Nian Yang, Hen-Hsen Huang, Ti-Rong Wu
Universities:
National Taiwan University: Shih-Hao Hung, Ming-Syan Chen, Chia-Hsiang Yang
National Yang Ming Chiao Tung University: Chen-Yi Lee
National Cheng Kung University: Lih-Yih Chiou
National University of Kaohsiung: Chun-Hsin Wu
National Kaohsiung University of Science and Technology: Yeong-Chau Kuo
Consultants: James C. Liao, Academia Sinica; HT Kung, Harvard University; Yuh-Jye Lee, Academia Sinica
Since February 2025