title: 'AlignVLM: Bridging Vision and Language Latent Spaces for Multimodal Understanding'

# Authors
# If you created a profile for a user (e.g. the default admin user), write the username (folder name) here
# and it will be replaced with their full name and linked to their profile.

authors:

  • Ahmed Masry
  • Juan A. Rodriguez
  • admin
  • Suyuchen Wang
  • Chao Wang
  • Aarash Feizi
  • Akshay Kalkunte Suresh
  • Abhay Puri
  • Xiangru Jian
  • Pierre-André Noël
  • Sathwik Tejaswi Madhusudhan
  • Marco Pedersoli
  • Bang Liu
  • Nicolas Chapados
  • admin
  • Enamul Hoque
  • Christopher Pal
  • Issam H. Laradji
  • David Vazquez
  • Perouz Taslakian
  • Spandana Gella
  • Sai Rajeswar

# Author notes (optional)

author_notes: []

date: '2025-02-03T00:00:00Z'
doi: ''

# Schedule page publish date (NOT publication's date).

publishDate: '2022-06-28T00:00:00Z'

# Publication type.
# Legend: 0 = Uncategorized; 1 = Conference paper; 2 = Journal article;
# 3 = Preprint / Working Paper; 4 = Report; 5 = Book; 6 = Book section;
# 7 = Thesis; 8 = Patent

publication_types: ['3']

# Publication name and optional abbreviated publication name.

publication: 'arXiv'
publication_short: 'arXiv'

abstract: Aligning visual features with language embeddings is a key challenge in vision-language models (VLMs). The performance of such models hinges on having a good connector that maps visual features generated by a vision encoder to a shared embedding space with the LLM while preserving semantic similarity. Existing connectors, such as multilayer perceptrons (MLPs), often produce out-of-distribution or noisy inputs, leading to misalignment between the modalities. In this work, we propose a novel vision-text alignment method, AlignVLM, that maps visual features to a weighted average of LLM text embeddings. Our approach leverages the linguistic priors encoded by the LLM to ensure that visual features are mapped to regions of the space that the LLM can effectively interpret. AlignVLM is particularly effective for document understanding tasks, where scanned document images must be accurately mapped to their textual content. Our extensive experiments show that AlignVLM achieves state-of-the-art performance compared to prior alignment methods. We provide further analysis demonstrating improved vision-text feature alignment and robustness to noise.

# Summary. An optional shortened abstract.

summary: AlignVLM maps visual features to a weighted average of LLM text embeddings, leveraging the LLM's linguistic priors so that projected features land in regions of the embedding space the model can interpret. It improves vision-text alignment and robustness to noise, achieving state-of-the-art performance compared to prior alignment methods, with particular strength on document understanding tasks.

tags: [Multimodal, Vision-Language Models, Document Understanding, Alignment, Deep Learning]

# Display this page in the Featured widget?

featured: true

# Custom links (uncomment lines below)
# links:
# - name: Custom Link
#   url: http://example.org

url_pdf: ''
url_code: ''
url_dataset: ''
url_poster: ''
url_project: ''
url_slides: ''
url_source: ''
url_video: ''

# Featured image
# To use, add an image named featured.jpg/png to your page's folder.

image:
  caption: 'Image credit: Unsplash'
  focal_point: ''
  preview_only: false

# Associated Projects (optional).
# Associate this publication with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. internal-project references content/project/internal-project/index.md.
# Otherwise, set projects: [].

projects:
  - DNI

# Slides (optional).
# Associate this publication with Markdown slides.
# Simply enter your slide deck's filename without extension.
# E.g. slides: "example" references content/slides/example/index.md.
# Otherwise, set slides: "".

slides: example


Click the Cite button above to demo the feature to enable visitors to import publication metadata into their reference management software.
Create your slides in Markdown - click the Slides button to check out the example.
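
The abstract above gives the gist of the method: rather than pushing vision-encoder outputs through an MLP straight into the LLM's latent space, each visual feature is mapped to a weighted average of the LLM's text embeddings, so the result stays in a region the LLM already knows how to read. The snippet below is a minimal, hypothetical PyTorch sketch of that idea, not the paper's released code; the class name, toy dimensions, and the choice of a single linear layer to produce the vocabulary weights are all assumptions made for illustration.

```python
import torch
import torch.nn as nn


class AlignSketch(nn.Module):
    """Hypothetical sketch of vision-to-text alignment via a weighted average
    of LLM text embeddings (toy illustration, not the official AlignVLM code)."""

    def __init__(self, vision_dim: int, llm_embedding: torch.Tensor):
        super().__init__()
        vocab_size, _ = llm_embedding.shape
        # Frozen copy of the LLM input-embedding table, shape (vocab_size, llm_dim).
        self.register_buffer("text_embeddings", llm_embedding)
        # Learned map from vision features to per-token weights (logits).
        self.to_vocab = nn.Linear(vision_dim, vocab_size)

    def forward(self, visual_feats: torch.Tensor) -> torch.Tensor:
        # visual_feats: (batch, num_patches, vision_dim)
        weights = torch.softmax(self.to_vocab(visual_feats), dim=-1)
        # Convex combination of text embeddings -> (batch, num_patches, llm_dim),
        # guaranteed to lie inside the hull of the LLM's token embeddings.
        return weights @ self.text_embeddings


# Toy usage with made-up sizes.
llm_emb = torch.randn(1000, 512)        # stand-in for an LLM embedding table
connector = AlignSketch(vision_dim=768, llm_embedding=llm_emb)
patches = torch.randn(2, 16, 768)       # stand-in vision-encoder output
aligned = connector(patches)            # shape: (2, 16, 512), fed to the LLM
```

Because the output is a convex combination of existing token embeddings, it cannot drift out of distribution the way an unconstrained MLP projection can, which is the property the abstract credits for the improved alignment and robustness to noise.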

