About Me

Hello, I'm Meiyun Wang. Currently, I am a 2nd-year PhD student in the Department of Systems Innovation, School of Engineering, at The University of Tokyo.

💡 Research Focus

My research focuses on explainable, trustworthy, and applicable AI.


🔄 Ongoing Projects

Causal Reasoning of Large Language Models: Exploring the causal reasoning capabilities of LLMs using synthetic data.

Large Language Models as Recommendation Systems: Leveraging LLMs for recommendation tasks with a focus on numerical data.


📚 Previous Projects

1. Mitigating Hallucinations in Large Language Models: Training privacy-sensitive student models using synthetic data generated by large language models.

2. Factor Extraction for Time Series Data Explanation Using Large Language Models: Utilizing large language models to extract factors and perform time series forecasting.

3. Risk Prediction by Combining Graph Neural Networks and Temporal Recurrent Neural Networks: Integrating graph neural networks and LSTM for accurate risk prediction.

4. Causality-Oriented Pre-Training Framework and Data Augmentation Methods: Developing a pre-training framework and data augmentation methods focused on causality extraction.

5. Application Exploration of Causality Extraction in Patent Texts: Exploring the application of causality extraction techniques in patent texts.

🔥 News

šŸ“ Publications (first author)

[ACL 2024] LLMFactor

LLMFactor: Extracting Profitable Factors through Prompts for Explainable Stock Movement Prediction
  • In this study, we introduce a novel framework called LLMFactor, which employs Sequential Knowledge-Guided Prompting (SKGP) to identify factors that influence stock movements using LLMs.
  • Meiyun Wang, Kiyoshi Izumi, Hiroki Sakaji, LLMFactor: Extracting Profitable Factors through Prompts for Explainable Stock Movement Prediction, Findings of the Association for Computational Linguistics: ACL 2024, 2024.

[preprint] CausalEnhance

CausalEnhance: Knowledge-Enhanced Pre-training for Causality Identification and Extraction
  • We introduce CausalEnhance, a novel knowledge-enhanced pre-training method empowered by a rule-based automated annotation system.
  • Meiyun Wang, Kiyoshi Izumi, Hiroki Sakaji.

[WPI 2023] PatentCausality

Discovering new applications: Cross-domain exploration of patent documents using causal extraction and similarity analysis
  • This study proposes an approach that employs causality extraction and similarity analysis to explore a technology's applicability beyond what is explicitly stated in patents.
  • Meiyun Wang, Hiroki Sakaji, Hiroaki Higashitani, Mitsuhiro Iwadare, and Kiyoshi Izumi, Discovering new applications: Cross-domain exploration of patent documents using causal extraction and similarity analysis, World Patent Information, 75:102238, 2023.

🎖 Honors

💻 Internships

  • 2024.08 - 2024.11: Amazon Data Scientist Fellow.
  • 2023.11 - 2024.01: Mizuho, Tokyo.
  • 2022.04 - 2022.07: Google STEP, Tokyo.
  • 2020.11 - 2021.08: Tencent, Shenzhen.
  • 2020.08 - 2020.11: PingAn, Shenzhen.