![Let The Machine Translate The Manga!](https://aisholar.s3.ap-northeast-1.amazonaws.com/media/January2021/%E6%BC%AB%E7%94%BB%E3%82%92%E3%82%82%E3%81%A3%E3%81%A8%E5%A4%9A%E3%81%8F%E3%81%AE%E4%BA%BA%E3%81%AB2-min.png)
Let The Machine Translate The Manga!
Three key points
✔️ Multimodal, context-aware manga translation
✔️ The first (multilingual) manga translation benchmark and dataset
✔️ The first fully automated image-to-image manga translation framework
Towards Fully Automated Manga Translation
written by Ryota Hinami, Shonosuke Ishiwatari, Kazuhiko Yasuda, Yusuke Matsui
(Submitted on 28 Dec 2020 (v1), last revised 9 Jan 2021 (this version, v3))
Comments: Accepted to AAAI2021
Subjects: Computation and Language (cs.CL)
Introduction
Japanese manga (along with Korean webtoons and Chinese manhua) are popular all over the world, and translating them into other languages manually, line by line, is a resource-intensive task. While neural machine translation (NMT) systems such as DeepL and Google Translate have made significant progress, their application to manga translation has been relatively limited. This is mainly because manga mixes text and images: text boxes are not neatly aligned, and the reading order is not always consistent. Furthermore, some Japanese words can only be rendered as "he", "him", "she", or "her" once the context is known. Without contextual information from the manga panels, an accurate translation may therefore be impossible. This paper presents a novel context-aware multimodal translation system for manga that addresses these problems, and also introduces the first benchmark dataset for manga translation.
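The two obstacles above, recovering the reading order of scattered text boxes and translating each line with the preceding dialogue as context, can be illustrated with a minimal sketch. Everything here is illustrative: the `TextRegion` fields, the right-to-left row heuristic, and the `translate` callback are assumptions for the sake of the example, not the paper's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class TextRegion:
    """A detected speech bubble with its page position and OCR text."""
    x: float   # left edge (page coordinates)
    y: float   # top edge
    text: str  # OCR output (source-language text)

def reading_order(regions):
    """Sort speech bubbles into manga reading order.

    Japanese manga is read right-to-left within a row of panels, then
    top-to-bottom. Bucketing by a fixed row height of 100 units is a
    crude heuristic; the paper infers ordering from panel structure.
    """
    return sorted(regions, key=lambda r: (round(r.y / 100), -r.x))

def translate_with_context(regions, translate):
    """Translate each bubble, passing all earlier bubbles as context.

    `translate(text, context)` is a hypothetical callback standing in
    for a context-aware NMT model.
    """
    ordered = reading_order(regions)
    out = []
    for i, region in enumerate(ordered):
        context = " ".join(prev.text for prev in ordered[:i])
        out.append(translate(region.text, context))
    return out
```

For example, a bubble at the top right of the page is translated first, and by the time the last bubble is reached, the model sees the whole preceding dialogue as context, which is what lets it resolve otherwise ambiguous pronouns.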