Vision+Language Tasks

We are particularly interested in bridging vision, language, and knowledge. See our VQA demo and the FVQA dataset.
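To give a flavour of the fact-based setting, here is a toy sketch (hypothetical, and much simpler than the actual FVQA pipeline): it assumes a small in-memory store of (subject, relation, object) triples and shows how detected visual concepts and the question's phrasing could jointly select a supporting fact. All names here (`Fact`, `parse_relation`, `answer`, the sample triples) are invented for illustration.

```python
# Toy illustration of fact-based VQA (hypothetical; not the FVQA pipeline).
# A "fact" links a visual concept to external knowledge via a relation;
# answering amounts to finding a fact whose subject is visible in the
# image and whose relation matches the question's phrasing.

from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str   # visual concept, e.g. a detected object
    relation: str  # e.g. "is_a", "used_for", "capable_of"
    obj: str       # knowledge-base entity that may serve as the answer

# Tiny stand-in for an external knowledge base (e.g. ConceptNet).
FACTS = [
    Fact("umbrella", "used_for", "keeping dry in the rain"),
    Fact("dog", "is_a", "domesticated animal"),
    Fact("dog", "capable_of", "guarding a house"),
]

def parse_relation(question: str) -> str:
    """Map question phrasing to a relation. Real systems learn this
    mapping; it is hand-coded here purely for illustration."""
    q = question.lower()
    if q.startswith("what can"):
        return "capable_of"
    if "used for" in q:
        return "used_for"
    return "is_a"

def answer(detected_concepts: set, question: str):
    """Return the object of the first fact whose subject was detected
    in the image and whose relation matches the question."""
    relation = parse_relation(question)
    for fact in FACTS:
        if fact.subject in detected_concepts and fact.relation == relation:
            return fact.obj
    return None

# An image where a detector found a dog and some grass:
print(answer({"dog", "grass"}, "What can the dog do?"))
# -> guarding a house
```

The point of the example is only the division of labour: vision supplies the detected concepts, language supplies the relation cue, and the knowledge base supplies the answer.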

Related work:

√     S. Chen, Q. Jin, P. Wang, Q. Wu. Say As You Wish: Fine-grained Control of Image Caption Generation with Abstract Scene Graphs. In: CVPR, 2020.

√     H. Li, P. Wang, C. Shen, A. van den Hengel. Visual Question Answering as Reading Comprehension. In: CVPR, 2019.

√     P. Wang, Q. Wu, C. Shen, A. Dick, A. van den Hengel. FVQA: Fact-based Visual Question Answering. In: TPAMI, 2018.

√     Q. Wu, P. Wang, C. Shen, I. Reid, A. van den Hengel. Are You Talking to Me? Reasoned Visual Dialog Generation through Adversarial Learning. In: CVPR, 2018.

√     Q. Wu, C. Shen, P. Wang, A. Dick, A. van den Hengel. Image Captioning and Visual Question Answering based on Attributes and External Knowledge. In: TPAMI, 2017.

√     P. Wang, Q. Wu, C. Shen, A. van den Hengel. The VQA-Machine: Learning How to Use Existing Vision Algorithms to Answer New Questions. In: CVPR, 2017.

√     P. Wang, Q. Wu, C. Shen, A. van den Hengel, A. Dick. Explicit Knowledge-based Reasoning for Visual Question Answering. In: IJCAI, 2017.

√     Q. Wu, P. Wang, C. Shen, A. Dick, A. van den Hengel. Ask Me Anything: Free-form Visual Question Answering based on Knowledge from External Sources. In: CVPR, 2016.

