LOVA3: Learning to Visual Question Answering, Asking and Assessment

1Show Lab, National University of Singapore, 2Singapore Management University

In NeurIPS 2024

Abstract

Question answering, asking, and assessment are three innate human traits crucial for understanding the world and acquiring knowledge. By enhancing these capabilities, humans can more effectively utilize data, leading to better comprehension and learning outcomes. Current Multimodal Large Language Models (MLLMs) primarily focus on question answering, often neglecting the full potential of questioning and assessment skills. Inspired by the human learning mechanism, we introduce LOVA3, an innovative framework named "Learning tO Visual question Answering, Asking and Assessment," designed to equip MLLMs with these additional capabilities.

Our approach involves the creation of two supplementary training tasks, GenQA and EvalQA, aimed at fostering the skills of asking and assessing questions in the context of images. To develop the questioning ability, we compile a comprehensive set of multimodal foundational tasks. For assessment, we introduce a new benchmark called EvalQABench, comprising 64,000 training samples (split evenly between positive and negative samples) and 5,000 validation and testing samples. We posit that equipping MLLMs with the capabilities to answer, ask, and assess questions will strengthen their multimodal comprehension, ultimately improving overall performance. To validate this hypothesis, we train MLLMs using the LOVA3 framework and evaluate them on a range of multimodal datasets and benchmarks. Our results demonstrate consistent performance gains, underscoring the critical role of these additional tasks in fostering comprehensive intelligence in MLLMs.

Key Contributions

LOVA3 - To the best of our knowledge, LOVA3 is the first effort to imbue asking and assessment abilities into the training of a robust and intelligent MLLM, inspired by the human learning mechanism.

EvalQABench - We build a new benchmark, EvalQABench, for VQA correctness evaluation, as a first effort to advance future research in this direction.

Performance Improvement - Training with our proposed LOVA3 framework, we observe consistent improvements on 10 representative benchmarks.

GenQA: Learn to generate diverse VQA pairs for unlabeled images

If an MLLM can generate high-quality question-answer pairs from visual input, this indicates stronger problem-solving ability and deeper visual understanding. To enable the MLLM to ask questions, we gather existing annotated datasets as the training corpus and train the model to predict both the question and the answer.
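To make the GenQA data format concrete, below is a minimal sketch of how an annotated (image, question, answer) triplet could be repackaged into an "asking" training sample in a LLaVA-style conversation format. The prompt templates and names (`ASK_PROMPTS`, `build_genqa_instance`) are illustrative assumptions, not the exact templates used in LOVA3.

```python
# A minimal sketch (assumption, not the exact LOVA3 templates) of how an
# annotated (image, question, answer) triplet can be repackaged into a GenQA
# training sample: the model must produce BOTH the question and the answer.
import random

# Hypothetical prompt templates for the "asking" instruction.
ASK_PROMPTS = [
    "Look at the image and generate a question together with its answer.",
    "Based on the image, ask a question and then answer it.",
]

def build_genqa_instance(image_path: str, question: str, answer: str) -> dict:
    """Convert one annotated VQA example into a LLaVA-style GenQA conversation."""
    return {
        "image": image_path,
        "conversations": [
            {"from": "human", "value": "<image>\n" + random.choice(ASK_PROMPTS)},
            {"from": "gpt", "value": f"Question: {question}\nAnswer: {answer}"},
        ],
    }

# Example with a VQAv2-style annotation.
sample = build_genqa_instance("coco/000000039769.jpg",
                              "How many cats are lying on the couch?", "2")
print(sample["conversations"][1]["value"])
```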

EvalQA: Learn to assess the correctness of VQA triplet

Automatic Data Generation Pipeline

(1) We demonstrate that leveraging non-commercial MLLMs/LLMs is sufficient for generating negative answers and feedback.
(2) Thanks to the stationary error patterns of the offline MLLM and LLM, manual data filtering and correction can be carried out efficiently; a sketch of this generation step is shown below.

Stationary error patterns.
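The generation step referenced in (1)-(2) can be approximated with a short script: given a correct VQA pair, an offline LLM is prompted to produce a plausible wrong answer plus one sentence of feedback, which then becomes a candidate negative EvalQA sample. This is a hedged sketch; the model name, prompt wording (`NEG_PROMPT`), and post-processing are assumptions and may differ from the paper's exact setup.

```python
# Hedged sketch of the negative-answer/feedback generation step, assuming an
# offline chat LLM served through Hugging Face `transformers`. The model name,
# prompt wording, and post-processing are illustrative assumptions.
from transformers import pipeline

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"  # placeholder: any local chat LLM
generator = pipeline("text-generation", model=MODEL_NAME, device_map="auto")

NEG_PROMPT = (
    "Given a question and its correct answer, produce a plausible but WRONG "
    "answer and one sentence of feedback explaining why it is wrong.\n"
    "Question: {question}\nCorrect answer: {answer}\n"
    "Wrong answer and feedback:"
)

def make_negative_sample(question: str, answer: str) -> str:
    """Ask the offline LLM for a wrong answer plus feedback for one VQA pair."""
    prompt = NEG_PROMPT.format(question=question, answer=answer)
    out = generator(prompt, max_new_tokens=64, do_sample=True,
                    return_full_text=False)[0]["generated_text"]
    return out.strip()

# Candidate negative samples like this are then manually filtered and corrected.
print(make_negative_sample("What color is the bus?", "yellow"))
```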

EvalQABench

We introduce a new benchmark, EvalQABench, to address the problem of assessing visual question-answering data. Moreover, instead of merely labeling each VQA pair with "Yes/No", we advocate for integrating feedback into each instance, an important aspect rarely seen in prior multimodal benchmarks.
In total, EvalQABench comprises 64,000 training samples (split evenly between positive and negative samples) and 5,000 validation and testing samples.
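For illustration, one negative and one positive EvalQABench instance might look like the following; the field names and feedback wording are assumptions for readability, not the released schema.

```python
# Illustrative structure of EvalQABench instances: a VQA triplet plus a Yes/No
# correctness label and natural-language feedback. Field names and wording are
# assumptions for readability; consult the released data for the exact schema.
negative_example = {
    "image": "coco/000000123456.jpg",   # hypothetical VQAv2/COCO image path
    "question": "How many people are in the photo?",
    "answer": "5",                      # candidate answer to be judged
    "label": "No",                      # is the triplet correct?
    "feedback": "The answer is wrong; only three people are visible.",
}

positive_example = {
    "image": "coco/000000123456.jpg",
    "question": "How many people are in the photo?",
    "answer": "3",
    "label": "Yes",
    "feedback": "The answer correctly counts the three people in the image.",
}
```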



Nine selected representative data types

(1) The question types in our EvalQABench are diverse.
(2) All candidate samples for data creation come from the VQAv2 training set.



Results of EvalQABench



Data Statistics

Main Results of Training on LLaVA1.5

(1) We posit that equipping MLLMs with the capabilities to answer, ask, and assess questions will strengthen their multimodal comprehension, ultimately improving overall performance.
(2) We do not tune any of the hyperparameters used for training LLaVA1.5.
(3) No extra data annotations are used for the GenQA task. For the EvalQA task, we only provide the negative answer as an additional annotation (see the data-mixing sketch below).
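A hedged sketch of the data mixing implied by notes (2) and (3): GenQA and EvalQA samples are simply appended to the original LLaVA1.5 instruction-tuning data, with no change to training hyperparameters. The file names below are placeholders for the corresponding released data files.

```python
# Hedged sketch of the data mixing implied by notes (2) and (3): GenQA and
# EvalQA samples are appended to the original LLaVA1.5 instruction-tuning data,
# with no hyperparameter changes. File names are placeholders.
import json
import random

def load(path: str) -> list:
    with open(path) as f:
        return json.load(f)

answer_data = load("llava_v1_5_mix665k.json")   # original answering mixture (placeholder)
genqa_data  = load("genqa_instances.json")      # asking task samples (placeholder)
evalqa_data = load("evalqa_train_64k.json")     # assessment task samples (placeholder)

mixture = answer_data + genqa_data + evalqa_data
random.shuffle(mixture)

with open("lova3_training_mix.json", "w") as f:
    json.dump(mixture, f)
```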

BibTeX

@inproceedings{
      anonymous2024lova,
      title={{LOVA}3: Learning to Visual Question Answering, Asking and Assessment},
      author={Zhao, Henry Hengyuan and Zhou, Pan and Gao, Difei and Shou, Mike Zheng},
      booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
      year={2024},
      url={https://openreview.net/forum?id=vIOKLMl6wu}
      }