Bridging the Gap: My PhD Journey at RISC-V Summit 2025

The RISC-V Summit North America 2025 (October 21-23) in Santa Clara was a pivotal event for my research. Having just completed the first year of my PhD on the next frontier of RISC-V verification, using Large Language Models (LLMs) to verify complex Out-of-Order (OoO) and superscalar cores, I found the summit felt less like a conference and more like a direct injection of industry validation and technical insight.

The convergence of high-performance RISC-V cores (like Tenstorrent’s Ascalon and Nuclei’s UX1030H) with AI advancements meant that the core questions driving my thesis were being debated and solved in real-time on the show floor. I spent three packed days deliberately targeting companies and experts working at this intersection, gaining invaluable clarity that confirmed and sharpened my research direction.

That’s me in the mandatory formal-attire pic! Thanks to the media team for this wonderful picture, @kevkitphoto.

🗓️ Day 0: Deep Dives into Golden Models, Debug, and Certification

Tuesday, October 21st (Member Day), was essential for diving into the backbone of RISC-V verification: the official standards and tooling. My schedule was laser-focused on the committees and working groups defining verification primitives.

Standards for Simulation and Formal Verification

The most critical sessions focused on the infrastructure that makes reliable verification possible:

  • The State of the Sail RISC-V Model by Prashanth Mundkur was mandatory viewing. The Sail reference model is the formal “gold standard” against which RISC-V designs are verified. Understanding the latest development status, generated artifacts (like relaxed-concurrency memory semantics), and future plans of the golden model working group directly informs how I must structure my LLM-generated test scenarios to ensure architectural compliance.
  • RISC-V Unified Database: Past, Present, and Future by Paul Clarke (Ventana) and Derek Hower (Qualcomm) clarified the importance of a machine-readable ISA representation. This UDB is crucial because if I want an LLM to generate targeted, complex verification stimulus for an OoO core, that LLM must be trained or fine-tuned on a structured, unambiguous description of the ISA—exactly what the UDB provides.
  • Certification Steering Committee Update reinforced the market need for my work. James Ball (Qualcomm) and Bilal Zafar (10xEngineers) discussed the framework for RISC-V Certification, confirming that rigorous verification, beyond basic compliance testing, is paramount for industry confidence and adoption in high-reliability segments.
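The UDB point above can be made concrete with a small sketch: if an ISA is available as structured records, those records can be rendered into unambiguous context for an LLM prompt. The record and field names below are illustrative assumptions only, not the actual RISC-V Unified Database schema.

```python
# Sketch: turning a machine-readable instruction record into LLM prompt
# context. The dict below is a hypothetical stand-in -- NOT the real UDB
# schema -- showing the kind of structured fields such a database provides.
instruction = {
    "name": "add",
    "encoding": "0000000 rs2 rs1 000 rd 0110011",
    "description": "Adds rs2 to rs1 and writes the result to rd.",
    "operands": ["rd", "rs1", "rs2"],
}

def prompt_context(instr: dict) -> str:
    """Render one instruction record as unambiguous prompt context."""
    lines = [
        f"Instruction: {instr['name']}",
        f"Encoding: {instr['encoding']}",
        f"Operands: {', '.join(instr['operands'])}",
        f"Semantics: {instr['description']}",
    ]
    return "\n".join(lines)

print(prompt_context(instruction))
```

The value of a machine-readable source here is precisely that nothing in the rendered context is free-text guesswork: every field traces back to a structured entry the model can be fine-tuned or grounded on.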

Hands-On Tooling and Directed Testing

I sought out sessions that detailed practical verification methodologies, often intersecting with Tenstorrent’s push for open-source verification tools:

  • RiESCUE-D: A Powerful Framework for Directed Test Development in the RISC-V Ecosystem presented by Darshak Koshiya (Tenstorrent) was particularly insightful. Directed testing remains a critical pillar in verifying complex microarchitectural corner cases (like pipeline hazards in an OoO design). Their open-sourcing of this extensible framework shows the direction the industry is moving: away from proprietary, black-box testing towards community-driven, verifiable methodologies.
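To illustrate what a directed test for a pipeline corner case looks like, here is a minimal generator that emits a read-after-write (RAW) dependency chain, forcing an OoO pipeline to forward or stall on every instruction. This is not RiESCUE-D’s actual API, just a toy sketch of the kind of stimulus directed frameworks target.

```python
# Sketch of a directed-test generator for a read-after-write (RAW)
# dependency chain -- a classic microarchitectural stress pattern.
# This is an illustrative toy, NOT RiESCUE-D's interface.
def raw_hazard_chain(length: int, start_reg: int = 1) -> str:
    """Emit RISC-V assembly where each add consumes the previous
    result, so every instruction depends on the one before it."""
    lines = [f"    li   x{start_reg}, 1"]
    for i in range(length):
        dst = start_reg + i + 1
        src = start_reg + i
        lines.append(f"    add  x{dst}, x{src}, x{src}")
    return "\n".join(lines)

print(raw_hazard_chain(3))
```

A real framework parameterizes far more (register pressure, interleaved memory ops, exception points), but the core idea is the same: encode a known-hard microarchitectural pattern as a reusable template.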

My participation in the Performance Analysis SIG and the Event Trace TG also yielded technical gold. Discussing the intricacies of implementing features like Event Trace for minimal-overhead profiling directly impacts how I can use LLMs not just to find bugs, but to characterize performance bottlenecks in my target superscalar architectures.


💡 Day 1: Keynotes, OoO Validation, and the AI Nexus

Wednesday, October 22nd, was the central day, packed with keynotes that set the stage for the industry’s focus on high-performance compute and AI—the precise target domain of my research.

Keynotes: The High-Performance Mandate

The morning keynotes solidified the importance of high-performance, complex RISC-V implementations:

  • Keynote: Paving the Road to Datacenter-Scale RISC-V by Martin Dixon (Google) detailed the shift to specialized, heterogeneous datacenters. He explicitly mentioned how AI was used to automate the complex porting of their legacy systems—a massive vote of confidence for my work using LLMs in a development pipeline.
  • Keynote: RISC-V Outperforming Expectations by Richard Wawrzyniak (The SHD Group) highlighted that RISC-V-enabled silicon is projected to exceed 20 billion units by 2031. This explosive adoption in high-value segments necessitates entirely new, scalable verification techniques—a gap my LLM-driven research aims to fill.
  • A RISCy Approach to Microprocessor Technology by David Patterson served as a grounding vision. Hearing from one of the founders about the necessity of open, flexible architectures reinforced the ‘why’ behind the complexity I am trying to verify.

Verification of Complex Cores: My Core Focus

The technical sessions provided the direct validation of my research themes:

  • Verifying a Complex RISC-V Processor Using Test Generation and Hardware Emulation Techniques by Aimee Sutton and Weihua Han (Synopsys) was arguably the most relevant talk of the entire summit. It showcased a case study using the open-source XiangShan OoO core and discussed adapting test generators for emulation. This underscored that automated test generation is a critical bottleneck in verifying these complex designs, reinforcing the case for generative AI in stimulus creation.
  • Verifying Out-Of-Order RISC-V Vector Extension With Open Source Tools by Sharvil Panandikar and Amit Kumar (Tenstorrent) demonstrated a tangible open-source framework (Riescue C/D) dedicated to verifying complex features like Out-of-Order (OoO) RVV implementations. This showed me precisely the kind of advanced, microarchitecturally aware testing my LLM output must seamlessly integrate with.
  • Accelerating Software Development for High Performance Chiplet-based Compute Using Virtual Prototype by Rae Parnmukh (Tenstorrent) and Larry Lapides (Synopsys) highlighted the critical path problem: software development must start before silicon is ready. Virtual prototypes (fast simulators) require verified models, which in turn demand faster, smarter verification methods.

Networking Success: Confirming the LLM Direction

This day was immensely productive for networking. I spoke at length with engineers from Tenstorrent and Synopsys, specifically targeting individuals involved in test generation and DV methodology. I got direct feedback that their teams are actively looking at LLMs for test-case generation, though they are currently grappling with coherence and architectural compliance (the exact problems my PhD is tackling!). This confirmed I am on the correct research path.
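The architectural-compliance problem those engineers described can be sketched in a few lines: before any LLM-generated test reaches a simulator, filter out instructions the ISA does not define. A production flow would check against the Sail model or the UDB; the tiny whitelists below are purely illustrative assumptions.

```python
# Sketch of an architectural-compliance filter for LLM-generated
# assembly: reject any line whose mnemonic or operand is not recognized.
# The whitelists are toy assumptions, not a complete RISC-V definition.
import re

ALLOWED_MNEMONICS = {"add", "addi", "sub", "lw", "sw", "li", "beq"}
REGISTER = re.compile(r"^x([0-9]|[12][0-9]|3[01])$")   # x0..x31

def compliant(asm: str) -> bool:
    """Return True only if every line uses known mnemonics and operands."""
    for line in asm.strip().splitlines():
        tokens = line.replace(",", " ").split()
        if not tokens:
            continue
        mnemonic, operands = tokens[0], tokens[1:]
        if mnemonic not in ALLOWED_MNEMONICS:
            return False
        for op in operands:
            # operands must be valid registers or integer immediates
            if not (REGISTER.match(op) or re.fullmatch(r"-?\d+", op)):
                return False
    return True

print(compliant("add x1, x2, x3"))    # True: well-formed instruction
print(compliant("vmagic x1, x99"))    # False: unknown mnemonic and register
```

Coherence (does the test exercise a meaningful scenario?) is the harder half of the problem, but even this crude syntactic gate removes a whole class of hallucinated stimulus before simulation time is spent on it.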


🤝 Day 2: LLMs in the Wild, Debug, and The Future of DV

Thursday, October 23rd, consolidated the theme of AI’s direct impact on chip design and verification, offering a clear vision of the future industry landscape.

Verification Meets AI: The Ultimate Validation

The highlight sessions directly related to my goal of using generative models for verification:

  • Enhancing Coverage with AI-Driven Verification presented in the AI/ML Poster Sessions by Madhulima Tewari and Kenneth Roe (Verifaix Inc) was a direct technical hit. It demonstrated active commercial interest in using AI, not just for bug triage, but for improving the core metric of verification: coverage closure. This is a tangible outcome I must aim for with my LLM-generated tests.
  • Efficient RISC-V Processor Customization: Minimizing Verification Efforts by Zdeněk Přikryl (Codasip) showed how design tools must be coupled with smart verification flows. His emphasis on customization perfectly aligns with the flexible, configurable nature of LLM-generated stimulus.
  • RISC-V System-level Certification from Verification Foundations by Adnan Hamid (Breker Verification Systems) provided the final framework. It confirmed that the industry is progressing from micro-architectural core testing to full system-level verification, requiring stimulus that exercises complex interactions—a complexity LLMs are uniquely suited to model.
  • Unleashing ML Processing Power Through RISC-V Vectors: Applications, Implementation and Verification by Brian Barker (Breker) detailed how RVV complexity increases the verification burden dramatically, further justifying the need for generative, intelligent test systems.
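The coverage-closure goal from the Verifaix poster can be illustrated with a toy feedback loop: generate stimulus, measure which coverage bins it hits, and keep only tests that close new bins. Both the bin model and the random generator below are stand-ins I made up for illustration, not any commercial tool’s flow.

```python
# Toy sketch of a coverage-closure feedback loop. The "coverage model"
# (value mod 8) and random stimulus are illustrative stand-ins only.
import random

def coverage_bins(test: list[int]) -> set[int]:
    """Toy coverage model: each stimulus value maps to one of 8 bins."""
    return {value % 8 for value in test}

def close_coverage(target_bins: set[int], seed: int = 0) -> list[list[int]]:
    """Generate tests until every target bin is hit, keeping only
    tests that contribute at least one new bin."""
    rng = random.Random(seed)
    covered: set[int] = set()
    kept: list[list[int]] = []
    while covered < target_bins:          # loop until all bins are closed
        test = [rng.randrange(256) for _ in range(4)]
        new = coverage_bins(test) - covered
        if new:                           # keep only coverage-advancing tests
            covered |= new
            kept.append(test)
    return kept

tests = close_coverage(set(range(8)))
print(len(tests), "tests kept to close all 8 bins")
```

The interesting research question, and where a generative model replaces the `rng` line, is biasing generation toward the stubborn last bins instead of sampling blindly.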

High-Performance and Software Validation

I focused on sessions that detailed the operational complexity of high-performance, superscalar RISC-V cores, ensuring my generated tests are realistic:

  • How NOT To Program an Out-of-order Vector Processor by Dongjie Xie (Tenstorrent) offered crucial insights into the performance pitfalls of writing code for OoO vector processors. Understanding these microarchitectural quirks is essential for teaching an LLM to generate effective stress-case tests that reveal true design flaws.
  • SBI V3.0: Fueling the Next Wave of RISC-V System Software Innovation by Atish Patra (Rivos) and Anup Patel (Ventana) detailed new extensions for RAS (Reliability, Availability, and Serviceability) and native debugging. These features are prime targets for verification and are perfect examples of complex, stateful architectural features that LLMs can systematically test.

Final Connections and Outlook

The summit concluded with the Keynote Panel: Linux and RISC-V: Principles for a Winning Partnership, which cemented the ecosystem’s maturity. The discussions throughout the week with senior experts, particularly during the Demo Theater sessions (like Tenstorrent’s update on their Ascalon processor and Synopsys’s tooling demos), reinforced that RISC-V is now a serious high-performance contender.

The biggest takeaway for my PhD is a resounding confirmation of my research direction. The problem is real, the industry needs intelligent, scalable verification solutions like the one I’m proposing with LLMs, and the technological building blocks—like the RISC-V UDB and open-source OoO cores—are finally in place. This summit provided the technical validation and the industry connections necessary to propel my research forward with confidence. I eagerly anticipate showcasing my results at the next summit!