Accepted Papers

The following papers have been accepted and are scheduled to be published as Vol. 33 No. 2 on June 15, 2026.

  • General Paper
    Zhengdong Yang, Sheng Li and Chenhui Chu
    Emotion-aware Speech Translation Correction with Large Language Models
  • General Paper
    Yoshiki Takenami, Yin Jou Huang, Yugo Murawaki, Koh Takeuchi and Chenhui Chu
    Investigation of the Anchoring Effect and its Occurrence Mechanism in LLM-driven Price Negotiation Simulation
  • General Paper
    Koki Maeda, Issa Sugiura, Yusuke Oda, Shuhei Kurita and Naoaki Okazaki
    Cross-Task Evaluation and Empirical Analysis of Japanese Visual Language Models
  • General Paper
    Yuanyuan Cai, Satoshi Kosugi, Kotaro Funakoshi and Manabu Okumura
    Enhancing Image Clustering with Captions
  • General Paper
    Koki Natsumi, Hiroyuki Deguchi, Yusuke Sakai, Hidetaka Kamigaito and Taro Watanabe
    Agreement-Constrained Efficient Probabilistic Minimum Bayes Risk Decoding with Knowledge Distillation Metrics
  • General Paper
    Hayato Tsukagoshi and Ryohei Sasano
    Ruri: Japanese General Text Embeddings
  • General Paper
    Jundai Suzuki, Ryoma Ishigaki and Eisaku Maeda
    AnaToM: A Dataset Generation Framework for Evaluating Theory of Mind Reasoning Toward the Anatomy of Difficulty through Structurally Controlled Story Generation
  • General Paper
    Tsuyoshi Okita, Satoru Katsumata, Keisuke Kamata, Hirokazu Kiyomaru, Takashi Kodama, Jun Suzuki, Hisami Suzuki, Kouta Nakayama, Namgi Han and Yusuke Miyao
    Design and Analysis of a Mathematics and Safety Tuning Competition for Large Language Models
  • General Paper
    Xinger Fu, Masaaki Nagata and Chenhui Chu
    LlmMT+1: Enhancing Non-dominant Language Pair in Large Language Model-based Machine Translation
  • General Paper
    Ryo Hasegawa, Yusuke Sakai, Hidetaka Kamigaito and Taro Watanabe
    The Effect of Knowledge Editing Methods on Confidence Calibration
  • General Paper
    Shintaro Ozaki, Kazuki Hayashi, Yusuke Sakai, Hidetaka Kamigaito, Katsuhiko Hayashi and Taro Watanabe
    Analyzing the Multilingual Ability of Vision-Language Models to Generate Explanations for Artworks
  • General Paper
    Zhi Qu, Yiran Wang, Jiannan Mao, Chenchen Ding, Hideki Tanaka, Masao Utiyama and Taro Watanabe
    MITRE: Efficient Pre-trained Models for Multilingual Neural Machine Translation with Registering
  • General Paper
    Youyuan Lin, Masaaki Nagata and Chenhui Chu
    Automatic Post-editing through Word-level Quality Estimation with Minimum Bayes Risk Decoding