Article

DriveLLaVA: Human-Level Behavior Decisions via Vision Language Model

1 College of Automotive Engineering, Jilin University, Changchun 130025, China
2 Graduate School of Information and Science Technology, The University of Tokyo, Tokyo 113-8654, Japan
3 National Key Laboratory of Automotive Chassis Integration and Bionics, Jilin University, Changchun 130025, China
* Author to whom correspondence should be addressed.
Submission received: 23 May 2024 / Revised: 21 June 2024 / Accepted: 22 June 2024 / Published: 25 June 2024
(This article belongs to the Section Vehicular Sensing)

Abstract

Human-level driving is the ultimate goal of autonomous driving. As the top-level decision-making layer of autonomous driving, behavior decision establishes short-term driving strategies by evaluating road structures, adhering to traffic rules, and analyzing the intentions of other traffic participants. Existing behavior decision methods are primarily rule-based and exhibit insufficient generalization when faced with new and unseen driving scenarios. In this paper, we propose a novel behavior decision method that leverages the inherent generalization and commonsense reasoning abilities of visual language models (VLMs) to learn and simulate the behavior decision process of human driving. We constructed a novel instruction-following dataset containing a large number of image–text instructions paired with corresponding driving behavior labels, to support the learning of the Drive Large Language and Vision Assistant (DriveLLaVA) and to enhance the transparency and interpretability of the entire decision process. DriveLLaVA is fine-tuned on this dataset using Low-Rank Adaptation (LoRA), which reduces the number of trainable parameters and significantly lowers training costs. We conducted extensive experiments on a large-scale instruction-following dataset; compared with state-of-the-art methods, DriveLLaVA demonstrated excellent behavior decision performance. DriveLLaVA is capable of handling various complex driving scenarios, showing strong robustness and generalization abilities.
Keywords: autonomous driving; behavior decision; visual language model; instruction fine-tuning
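
As a rough illustration of the LoRA-based instruction fine-tuning described in the abstract, the sketch below shows how a LLaVA-style vision language model could be wrapped with LoRA adapters via the Hugging Face PEFT library and paired with an image–text instruction record carrying a driving behavior label. The checkpoint name, target modules, hyperparameters, and the example data record are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch (not the authors' code): LoRA instruction fine-tuning of a
# LLaVA-style vision-language model with Hugging Face PEFT. Checkpoint, target
# modules, and hyperparameters below are assumptions for illustration only.
from transformers import LlavaForConditionalGeneration, AutoProcessor
from peft import LoraConfig, get_peft_model

base_id = "llava-hf/llava-1.5-7b-hf"  # assumed base checkpoint
processor = AutoProcessor.from_pretrained(base_id)
model = LlavaForConditionalGeneration.from_pretrained(base_id)

# LoRA injects small low-rank adapters into the attention projections, so only
# a small fraction of parameters is updated during instruction fine-tuning.
lora_cfg = LoraConfig(
    r=16,                              # adapter rank (assumed)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()     # typically well under 1% of the full model

# A single instruction-following sample might pair a front-camera image with a
# text prompt and a driving behavior label (hypothetical record for illustration):
sample = {
    "image": "frontal_camera_frame.jpg",
    "instruction": "Given the current traffic scene, what driving behavior "
                   "should the ego vehicle take?",
    "label": "decelerate and yield to the pedestrian at the crosswalk",
}
# Training would then apply a standard causal-LM loss on the tokenized
# (instruction, label) pair, conditioned on the image features.
```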

Citation: Zhao, R.; Yuan, Q.; Li, J.; Fan, Y.; Li, Y.; Gao, F. DriveLLaVA: Human-Level Behavior Decisions via Vision Language Model. Sensors 2024, 24, 4113. https://0-doi-org.brum.beds.ac.uk/10.3390/s24134113
