The development of autonomous agents for complex, long-horizon tasks is a central goal in AI. However, dominant training paradigms face a critical limitation: reinforcement learning (RL) methods that optimize solely for final task success often reinforce flawed or inefficient reasoning paths, a problem we term inefficient exploration. This leads to agents that are brittle and fail to generalize, as they learn to find solutions without learning how to reason coherently. To address this, we introduce RLVMR, a novel framework that integrates dense, process-level supervision into end-to-end RL by rewarding verifiable, meta-reasoning behaviors. RLVMR equips an agent to explicitly tag its cognitive steps, such as planning, exploration, and reflection, and provides programmatic, rule-based rewards for actions that contribute to effective problem-solving. These process-centric rewards are combined with the final outcome signal and optimized using a critic-free policy gradient method. On the challenging ALFWorld and ScienceWorld benchmarks, RLVMR achieves new state-of-the-art results, with our 7B model reaching an 83.6% success rate on the most difficult unseen task split. Our analysis confirms these gains stem from improved reasoning quality, including significant reductions in redundant actions and enhanced error recovery, leading to more robust, efficient, and interpretable agents.
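The core training signal described above can be pictured with a short, critic-free policy-gradient sketch; the reward mix, weighting coefficient, and group-baseline normalization below are illustrative assumptions rather than RLVMR's exact formulation.

```python
import numpy as np

def combined_returns(outcome_reward, process_rewards, lam=0.5):
    """Mix the final task outcome with dense, rule-based process rewards.

    outcome_reward: scalar (e.g., 1.0 on success, 0.0 on failure)
    process_rewards: per-step rewards for tagged meta-reasoning behaviors
                     (planning / exploration / reflection)
    lam: weighting coefficient (illustrative value, not the paper's)
    """
    return outcome_reward + lam * np.sum(process_rewards)

def critic_free_policy_gradient(logprobs, returns):
    """REINFORCE-style surrogate with a group-mean baseline instead of a critic.

    logprobs: (G, T) per-step log-probabilities of G sampled trajectories
    returns:  (G,) combined return of each trajectory
    Returns a scalar surrogate loss whose gradient would recover the policy
    gradient if logprobs came from a differentiable policy (e.g., a torch module).
    """
    advantages = returns - returns.mean()      # critic-free baseline
    advantages /= returns.std() + 1e-8         # optional normalization
    return -(advantages[:, None] * logprobs).sum(axis=1).mean()

# toy usage with fake rollout statistics
rng = np.random.default_rng(0)
G, T = 4, 6                                    # 4 rollouts, 6 steps each
logprobs = rng.normal(-1.0, 0.3, size=(G, T))
returns = np.array([combined_returns(o, rng.uniform(0, 0.2, T))
                    for o in [1.0, 0.0, 1.0, 0.0]])
print(critic_free_policy_gradient(logprobs, returns))
```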
Query-focused table summarization requires complex reasoning, often approached through step-by-step natural language (NL) plans. However, NL plans are inherently ambiguous and lack structure, limiting their conversion into executable programs like SQL and hindering scalability, especially for multi-table tasks. To address this, we propose a paradigm shift to structured representations. We introduce a new structured plan, TaSoF, inspired by formalism in traditional multi-agent systems, and a framework, SPaGe, that formalizes the reasoning process in three phases: 1) Structured Planning to generate TaSoF from a query, 2) Graph-based Execution to convert plan steps into SQL and model dependencies via a directed cyclic graph for parallel execution, and 3) Summary Generation to produce query-focused summaries. Our method explicitly captures complex dependencies and improves reliability. Experiments on three public benchmarks show that SPaGe consistently outperforms prior models in both single- and multi-table settings, demonstrating the advantages of structured representations for robust and scalable summarization.
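As a rough illustration of the graph-based execution phase, the sketch below runs placeholder SQL steps in dependency order, parallelizing independent steps; the plan schema, SQL bodies, and executor are hypothetical stand-ins (and an acyclic sub-plan is assumed), not the TaSoF/SPaGe implementation.

```python
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

# Hypothetical plan: step -> (SQL text, set of steps it depends on).
plan = {
    "s1": ("SELECT ... FROM t1", set()),
    "s2": ("SELECT ... FROM t2", set()),
    "s3": ("SELECT ... JOIN ...", {"s1", "s2"}),
}

def run_sql(step, sql, inputs):
    # Placeholder for a real database call; returns a fake result.
    return f"result({step})"

def execute_plan(plan):
    ts = TopologicalSorter({k: deps for k, (_, deps) in plan.items()})
    ts.prepare()
    results = {}
    with ThreadPoolExecutor() as pool:
        while ts.is_active():
            ready = list(ts.get_ready())        # steps whose dependencies are done
            futures = {s: pool.submit(run_sql, s, plan[s][0],
                                      {d: results[d] for d in plan[s][1]})
                       for s in ready}
            for s, f in futures.items():
                results[s] = f.result()
                ts.done(s)
    return results

print(execute_plan(plan))
```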
Land-air bimodal robots (LABR) are gaining attention for autonomous navigation, combining the high mobility of aerial vehicles with the long endurance of ground vehicles. However, existing LABR navigation methods are limited by suboptimal trajectories from mapping-based approaches and the excessive computational demands of learning-based methods. To address this, we propose a two-stage lightweight framework that integrates global keypoint prediction with local trajectory refinement to generate efficient and reachable trajectories. In the first stage, a Global Keypoint Prediction Network (GKPN) generates a hybrid land-air keypoint path. The GKPN includes a Sobel Perception Network (SPN) for improved obstacle detection and a Lightweight Attention Planning Network (LAPN) that improves predictive ability by capturing contextual information. In the second stage, the global path is segmented based on the predicted keypoints and refined using a mapping-based planner to create smooth, collision-free trajectories. Experiments conducted on our LABR platform show that our framework reduces network parameters by 14% and energy consumption during land-air transitions by 35% compared to existing approaches. The framework achieves real-time navigation without GPU acceleration and enables zero-shot transfer from simulation to reality.
Assigning passenger trips to specific network paths using automatic fare collection (AFC) data is a fundamental application in urban transit analysis. The task is a difficult inverse problem: the only available information consists of each passenger's total travel time and their origin and destination, while individual passenger path choices and dynamic network costs are unobservable, and behavior varies significantly across space and time. We propose a novel Bayesian hierarchical model to resolve this problem by jointly estimating dynamic network costs and passenger path choices while quantifying their uncertainty. Our model decomposes trip travel time into four components -- access, in-vehicle, transfer, and egress -- each modeled as a time-varying random walk. To capture heterogeneous passenger behavior, we introduce a multinomial logit model with spatiotemporally varying coefficients. We manage the high dimensionality of these coefficients using kernelized tensor factorization with Gaussian process priors to effectively model complex spatiotemporal correlations. We develop a tailored and efficient Markov chain Monte Carlo (MCMC) algorithm for model inference. A simulation study demonstrates the method's effectiveness in recovering the underlying model parameters. On a large-scale dataset from the Hong Kong Mass Transit Railway, our framework demonstrates superior estimation accuracy over established benchmarks. The results reveal significant spatiotemporal variations in passenger preferences and provide robust uncertainty quantification, offering transit operators a powerful tool for enhancing service planning and operational management.
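A minimal sketch of the path-choice component: a multinomial logit over candidate paths, where in the full model the coefficients would vary over space and time via kernelized tensor factorization with Gaussian process priors. The attributes and coefficient values below are illustrative assumptions, not the paper's.

```python
import numpy as np

def path_choice_probs(X, beta):
    """Multinomial logit over candidate paths for one OD pair.

    X:    (J, K) attributes of J candidate paths (e.g., in-vehicle time,
          number of transfers).
    beta: (K,) coefficients; in the full model these vary spatiotemporally
          (a single draw is used here for illustration).
    """
    u = X @ beta                 # systematic utilities
    u -= u.max()                 # numerical stabilization
    p = np.exp(u)
    return p / p.sum()

# toy example: 3 candidate paths, 2 attributes (in-vehicle time, transfers)
X = np.array([[32.0, 1], [35.0, 0], [30.0, 2]])
beta = np.array([-0.15, -0.8])   # illustrative values only
print(path_choice_probs(X, beta))
```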
The advent of end-to-end autonomy stacks - often lacking interpretable intermediate modules - has placed an increased burden on ensuring that the final output, i.e., the motion plan, is safe in order to validate the safety of the entire stack. This requires a safety monitor that is both complete (able to detect all unsafe plans) and sound (does not flag safe plans). In this work, we propose a principled safety monitor that leverages modern multi-modal trajectory predictors to approximate forward reachable sets (FRS) of surrounding agents. By formulating a convex program, we efficiently extract these data-driven FRSs directly from the predicted state distributions, conditioned on scene context such as lane topology and agent history. To ensure completeness, we leverage conformal prediction to calibrate the FRS and guarantee coverage of ground-truth trajectories with high probability. To preserve soundness in out-of-distribution (OOD) scenarios or under predictor failure, we introduce a Bayesian filter that dynamically adjusts the FRS conservativeness based on the predictor's observed performance. We then assess the safety of the ego vehicle's motion plan by checking for intersections with these calibrated FRSs, ensuring the plan remains collision-free under plausible future behaviors of others. Extensive experiments on the nuScenes dataset show our approach significantly improves soundness while maintaining completeness, offering a practical and reliable safety monitor for learned autonomy stacks.
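The conformal calibration step can be sketched in a few lines, assuming a scalar distance-based nonconformity score; the paper's convex-program extraction of FRSs from predicted distributions and the Bayesian conservativeness filter are not reproduced here.

```python
import numpy as np

def conformal_radius(pred_centers, true_points, alpha=0.1):
    """Split-conformal calibration of a reachable-set inflation radius.

    pred_centers: (n, 2) predicted future positions on a calibration set
    true_points:  (n, 2) corresponding ground-truth positions
    alpha:        miscoverage level (e.g., 0.1 for 90% coverage)

    Returns a radius r such that a disc of radius r around a new prediction
    covers the ground truth with probability >= 1 - alpha under exchangeability.
    """
    scores = np.linalg.norm(pred_centers - true_points, axis=1)  # nonconformity
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))     # rank of the conformal quantile
    # if k > n the quantile is unbounded; clamp to the max score for simplicity
    return np.sort(scores)[min(k, n) - 1]

rng = np.random.default_rng(1)
pred = rng.normal(size=(500, 2))
truth = pred + rng.normal(scale=0.3, size=(500, 2))
print(conformal_radius(pred, truth, alpha=0.1))
```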
AI agents powered by large language models are increasingly capable of autonomously completing complex, multi-step tasks using external tools. Yet, they still fall short of human-level performance in most domains, including computer use, software development, and research. Their growing autonomy and ability to interact with the outside world also introduce safety and security risks, including potentially misaligned actions and adversarial manipulation. We argue that human-in-the-loop agentic systems offer a promising path forward, combining human oversight and control with AI efficiency to unlock productivity from imperfect systems. We introduce Magentic-UI, an open-source web interface for developing and studying human-agent interaction. Built on a flexible multi-agent architecture, Magentic-UI supports web browsing, code execution, and file manipulation, and can be extended with diverse tools via the Model Context Protocol (MCP). Moreover, Magentic-UI presents six interaction mechanisms for enabling effective, low-cost human involvement, including co-planning, co-tasking, multi-tasking, action guards, and long-term memory. We evaluate Magentic-UI across four dimensions: autonomous task completion on agentic benchmarks, simulated user testing of its interaction capabilities, qualitative studies with real users, and targeted safety assessments. Our findings highlight Magentic-UI's potential to advance safe and efficient human-agent collaboration.
Existing earthmoving autonomy is largely confined to highly controlled and well-characterized environments due to the complexity of vehicle-terrain interaction dynamics and the partial observability of the terrain resulting from unknown and spatially varying soil conditions. In this chapter, a soil-property mapping system is proposed to extend the environmental state, in order to overcome these restrictions and facilitate the development of more robust autonomous earthmoving. A GPU-accelerated elevation mapping system is extended with a blind mapping component that traces the movement of the blade through the terrain to displace and erode intersected soil, enabling undisturbed and disturbed soil to be tracked separately. Each interaction is approximated as a flat blade moving through a locally homogeneous soil, enabling modeling of cutting forces using the fundamental equation of earthmoving (FEE). Building upon our prior work on in situ soil-property estimation, a method is devised to extract approximate geometric parameters of the model given the uneven terrain, and an improved physics-infused neural network (PINN) model is developed to predict soil properties and the uncertainties of these estimates. A simulation of a compact track loader (CTL) with a blade attachment is used to collect data to train the PINN model. Post-training, the model is leveraged online by the mapping system to track soil-property estimates spatially as separate layers in the map, with updates performed in a Bayesian manner. Initial experiments show that the system accurately highlights regions requiring higher relative interaction forces, indicating the promise of this approach in enabling soil-aware planning for autonomous terrain shaping.
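For reference, a common statement of the fundamental equation of earthmoving (the Reece form) models the cutting force on a flat blade as

$F = \left(\gamma g d^{2} N_{\gamma} + c\,d\,N_{c} + c_{a}\,d\,N_{a} + q\,d\,N_{q}\right) w$

where $\gamma$ is the soil density, $g$ gravity, $d$ the tool depth, $c$ the soil cohesion, $c_{a}$ the tool-soil adhesion, $q$ any surcharge pressure, $w$ the blade width, and the $N$ terms are dimensionless factors depending on soil and tool geometry. This is the generic textbook form, not necessarily the exact parameterization used in the chapter.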
Understanding reflectance-related quantities for worlds enables effective comparative planetology and strengthens mission planning and execution. Measurements of these properties for Earth, especially its geometric albedo and phase function, have been difficult to achieve due to our Terrestrial situation -- it is challenging to obtain planetary-scale brightness measurements for the world we stand on. Using a curated dataset of visual phase-dependent, disk-averaged observations of Earth taken from the ground and spacecraft, alongside a physical-statistical model, this work arrives at a definitive value for the visual geometric albedo of our planet: 0.242 (+0.005/-0.004). This albedo constraint is up to 30--40% smaller than earlier, widely quoted values. The physical-statistical model enables retrieval-like inferences to be performed on phase curves, and includes contributions from optically thick clouds, optically thin aerosols, Rayleigh scattering, ocean glint, gas absorption, and Lambertian surface reflectance. Detailed application of this inverse model to Earth's phase curve quantifies contributions of these different processes to the phase-dependent brightness of the Pale Blue Dot. Model selection identifies a scenario where aerosol forward scattering results in a false negative for surface habitability detection. Observations of phase curves for Earth at redder-optical or near-infrared wavelengths could disentangle ocean glint effects from aerosol forward scattering and would help with understanding the utility of phase curve observations for the under-development Habitable Worlds Observatory.
This study addresses the challenge of efficiently assigning locomotives in large freight rail networks, where operational complexity and power imbalances make cost-effective planning difficult. It presents a strategic optimization framework for the Locomotive Assignment Problem (LAP), developed in collaboration with a major North American Class I Freight Railroad. The problem is formulated as a network-based integer program over a cyclic space-time network, producing a repeatable weekly locomotive assignment plan. The model captures a comprehensive set of real-world operational constraints and jointly optimizes the placement of pick-up and set-out locomotive work events, improving the effectiveness of downstream planning. To solve large-scale instances exactly for the first time, novel reduction rules are introduced to dramatically reduce the number of light travel arcs in the space-time network. Extensive computational experiments demonstrate the performance and trade-offs on real instances under a variety of practical constraints. Beyond delivering scalable, high-quality solutions, the proposed framework serves as a practical decision-support tool grounded in the operational realities of modern freight railroads.
The increasing integration of IoT-connected devices in smart grids has introduced new vulnerabilities at the distribution level. Of particular concern is the potential for cyberattacks that exploit high-wattage IoT devices, such as EV chargers, to manipulate local demand and destabilize the grid. While previous studies have primarily focused on such attacks at the transmission level, this paper investigates their feasibility and impact at the distribution level. We examine how cyberattackers can target voltage-sensitive nodes, especially those exposed by the presence of high-consumption devices, to cause voltage deviation and service disruption. Our analysis demonstrates that conventional grid protections are insufficient against these intelligent, localized attacks. To address this, we propose resilience strategies using distributed generators (DGs), exploring their role in preemptive planning. This research highlights the urgent need for distribution-level cyber resilience planning in smart grids.
Accurate load forecasting is essential to the operation of modern electric power systems. Given the sensitivity of electricity demand to weather variability and temporal dynamics, capturing non-linear patterns is essential for long-term planning. This paper presents a comparative analysis of machine learning models, Linear Regression, XGBoost, LightGBM, and Long Short-Term Memory (LSTM), for forecasting system-wide electricity load up to one year in advance. Midterm forecasting has been shown to be crucial for maintenance scheduling, resource allocation, financial forecasting, and market participation. The paper focuses on the use of Shapley Additive Explanations (SHAP) to improve model explainability. SHAP enables the quantification of feature contributions, guiding informed feature engineering and improving both model transparency and forecasting accuracy.
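A minimal sketch of the SHAP workflow on a tree-based forecaster, using synthetic stand-in data; the feature set and model settings are illustrative, not those of the paper.

```python
import numpy as np
import pandas as pd
import shap
import xgboost

# Synthetic stand-in for hourly system load with weather/calendar features;
# feature names are illustrative, not the paper's actual inputs.
rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "temperature": rng.normal(15, 8, n),
    "hour": rng.integers(0, 24, n),
    "day_of_week": rng.integers(0, 7, n),
})
y = (40 + 0.8 * np.abs(X["temperature"] - 18)      # heating/cooling effect
     + 5 * np.sin(2 * np.pi * X["hour"] / 24)      # daily cycle
     + rng.normal(0, 1, n))

model = xgboost.XGBRegressor(n_estimators=200, max_depth=4).fit(X, y)

# TreeExplainer provides SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global importance: mean absolute SHAP value per feature.
print(dict(zip(X.columns, np.abs(shap_values).mean(axis=0))))
```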
We study the variational optimization of entangled probe states for quantum sensing tasks involving the estimation of a structured linear function of local phase parameters. Specifically, we consider scenarios where each qubit in a spin-1/2 array accumulates a phase $\phi_i = \alpha_i \theta$, with a known weight vector $\alpha$, reducing the task to single-parameter estimation of $\theta$. Using parameterized quantum circuits composed of dipolar-interacting gates and global rotations, we optimize probe states with respect to the Classical Fisher Information (CFI) using a gradient-free evolutionary strategy. We benchmark the optimized circuits for two relevant cases: (i) uniform encoding, where all qubits contribute equally to the phase function, and (ii) a custom encoding where a central qubit dominates the weight vector. In both cases, the optimized probe states approach the respective entanglement-enhanced (EE) limits dictated by the encoding structure. Our results demonstrate the power of variational approaches for tailoring metrologically useful entanglement to specific estimation tasks in quantum sensor networks.
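As a hedged, worked illustration of the target quantity (not the paper's optimized circuits), a GHZ-type probe with local phases $\phi_i = \alpha_i \theta$ and a parity-type readout attains a classical Fisher information of $(\sum_i \alpha_i)^2$, which the snippet below verifies numerically by finite differences.

```python
import numpy as np

def cfi_binary(p_plus, dp_plus):
    """Classical Fisher information for a two-outcome measurement."""
    p_minus = 1.0 - p_plus
    return dp_plus**2 / p_plus + dp_plus**2 / p_minus

def ghz_parity_prob(theta, alpha):
    """P(+) for a GHZ probe with local phases phi_i = alpha_i * theta,
    read out with a parity-type measurement (illustrative model)."""
    A = np.sum(alpha)
    return 0.5 * (1.0 + np.cos(A * theta))

alpha = np.array([1.0, 1.0, 1.0, 2.0])   # e.g., a dominant central qubit
theta, eps = 0.4, 1e-6
p = ghz_parity_prob(theta, alpha)
dp = (ghz_parity_prob(theta + eps, alpha)
      - ghz_parity_prob(theta - eps, alpha)) / (2 * eps)
print(cfi_binary(p, dp), np.sum(alpha)**2)   # both approach (sum alpha)^2
```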
The emergence of Multimodal Large Language Models (MLLMs) has driven significant advances in Graphical User Interface (GUI) agent capabilities. Nevertheless, existing GUI agent training and inference techniques still suffer from challenges in reasoning design, ineffective rewards, and visual noise. To address these issues, we introduce UI-AGILE, a comprehensive framework enhancing GUI agents at both the training and inference stages. For training, we propose a suite of improvements to the Supervised Fine-Tuning (SFT) process: 1) a Continuous Reward function to incentivize high-precision grounding; 2) a "Simple Thinking" reward to balance planning with speed and grounding accuracy; and 3) a Cropping-based Resampling strategy to mitigate the sparse reward problem and improve learning on complex tasks. For inference, we present Decomposed Grounding with Selection, a novel method that dramatically improves grounding accuracy on high-resolution displays by breaking the image into smaller, manageable parts. Experiments show that UI-AGILE achieves state-of-the-art performance on the ScreenSpot-Pro and ScreenSpot-v2 benchmarks. For instance, using both our proposed training and inference enhancements brings a 23% grounding accuracy improvement over the best baseline on ScreenSpot-Pro.
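The inference-time idea of Decomposed Grounding with Selection can be sketched as: crop, ground each crop, pick the most confident hit, and map back to full-image coordinates. The `grounder` call, grid size, and selection rule below are hypothetical placeholders, not UI-AGILE's exact procedure.

```python
import numpy as np

def decomposed_grounding(image, instruction, grounder, grid=(2, 2)):
    """Split a high-resolution screenshot into parts, ground the instruction
    in each part, and select the highest-confidence hit.

    grounder(crop, instruction) -> (x, y, confidence) is a hypothetical model
    call standing in for the GUI grounding model.
    """
    H, W = image.shape[:2]
    rows, cols = grid
    best = None
    for i in range(rows):
        for j in range(cols):
            y0, y1 = i * H // rows, (i + 1) * H // rows
            x0, x1 = j * W // cols, (j + 1) * W // cols
            x, y, conf = grounder(image[y0:y1, x0:x1], instruction)
            cand = (conf, x + x0, y + y0)      # map back to full-image coords
            if best is None or cand[0] > best[0]:
                best = cand
    return best[1], best[2]

# toy usage with a dummy grounder that "finds" the brightest pixel
def dummy_grounder(crop, instruction):
    idx = np.unravel_index(np.argmax(crop), crop.shape)
    return idx[1], idx[0], float(crop[idx])

img = np.random.default_rng(0).random((1080, 1920))
print(decomposed_grounding(img, "click the save button", dummy_grounder))
```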
We propose a framework that enables autonomous vehicles (AVs) to proactively shape the intentions and behaviors of interacting human drivers. The framework employs a leader-follower game model with an adaptive role mechanism to predict human interaction intentions and behaviors. It then utilizes a branch model predictive control (MPC) algorithm to plan the AV trajectory, persuading the human to adopt the desired intention. The proposed framework is demonstrated in an intersection scenario. Simulation results illustrate the effectiveness of the framework for generating persuasive AV trajectories despite uncertainties.
We present a quantum algorithm for solving perfect mazes by casting the pathfinding task as a structured search problem. Building on Grover's amplitude amplification, the algorithm encodes all candidate paths in superposition and evaluates their proximity to the goal using a reversible fitness operator based on quantum arithmetic. A Grover-compatible oracle marks high-fitness states, and an adaptive cutoff strategy refines the search iteratively. We provide formal definitions, unitary constructions, and convergence guarantees, along with a resource analysis showing efficient scaling with maze size and path length. The framework serves as a foundation for quantum-hybrid pathfinding and planning. The full algorithmic pipeline is specified from encoding to amplification, including oracle design and fitness evaluation. The approach is readily extensible to other search domains, including navigation over tree-like or acyclic graphs.
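A minimal numpy simulation of the amplitude-amplification core (oracle phase flip plus inversion about the mean) illustrates the expected $O(\sqrt{N/M})$ scaling; the maze path encoding, fitness operator, and adaptive cutoff described above are not reproduced here.

```python
import numpy as np

def grover_success_prob(n_qubits, marked, iterations):
    """Simulate Grover amplitude amplification on a 2^n-dimensional register.

    marked: indices of 'high-fitness' basis states (the oracle's accept set).
    Returns the probability of measuring a marked state after the given
    number of Grover iterations.
    """
    N = 2 ** n_qubits
    state = np.full(N, 1.0 / np.sqrt(N))      # uniform superposition
    for _ in range(iterations):
        state[marked] *= -1.0                 # oracle: phase-flip marked states
        state = 2.0 * state.mean() - state    # diffusion: inversion about the mean
    return float(np.sum(state[marked] ** 2))

N_QUBITS, MARKED = 8, [42]                    # one solution among 256 candidates
optimal = int(round(np.pi / 4 * np.sqrt(2 ** N_QUBITS / len(MARKED))))
print(optimal, grover_success_prob(N_QUBITS, MARKED, optimal))
```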
Airborne mobile Integrated Sensing and Communication (ISAC) base stations have garnered significant attention recently, with ISAC technology being a crucial application for 6G networks. Since ISAC can sense potential mobile communication users, this paper studies an effective scheme for a multi-UAV network tailored for emergency communication. We develop a temporal-assisted frame structure utilizing an integrated omnidirectional and directional beampattern to facilitate efficient and frequent searching, with extended Kalman filtering (EKF) as an aid to beam alignment. Further, we address an optimization problem to maximize the total achievable rate per slot by jointly designing UAV beamforming, load management, and UAV direction planning, all while adhering to the constraints of the predicted beam coverage. Since the problem is NP-hard, we introduce three robust mechanisms for its resolution: an enhanced distributed Successive Convex Approximation (SCA)-Iterative Rank Minimization (IRM) algorithm, a coalition game approach, and a Fermat point search method. In particular, the proposed SCA-IRM algorithm decomposes the original complex optimization problem into several sub-problems and assigns them equally to each UAV, so as to realize distributed computing and improve computational efficiency. Simulations demonstrate improved system performance in terms of communication rate, fairness, and sensing accuracy, providing design guidelines for UAV-assisted emergency communication networking.
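The EKF aid to beam alignment can be pictured with a generic predict/update loop; with the linear constant-velocity motion model and position-only measurements assumed below, the EKF reduces to a standard Kalman filter, and all noise settings are illustrative rather than taken from the paper.

```python
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # we observe position only
Q = 0.01 * np.eye(4)                        # process noise (assumed)
R = 0.25 * np.eye(2)                        # measurement noise (assumed)

def ekf_step(x, P, z):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - H @ x_pred                      # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(4), np.eye(4)
for z in np.array([[1.0, 0.5], [1.1, 0.6], [1.2, 0.8]]):   # fake user positions
    x, P = ekf_step(x, P, z)
print(x)   # predicted user state, usable for beam steering
```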
Graph neural networks (GNNs) excel at predictive tasks on graph-structured data but often lack the ability to incorporate symbolic domain knowledge and perform general reasoning. Relational Bayesian Networks (RBNs), in contrast, enable fully generative probabilistic modeling over graph-like structures and support rich symbolic knowledge and probabilistic inference. This paper presents a neuro-symbolic framework that seamlessly integrates GNNs into RBNs, combining the learning strength of GNNs with the flexible reasoning capabilities of RBNs. We develop two implementations of this integration: one compiles GNNs directly into the native RBN language, while the other maintains the GNN as an external component. Both approaches preserve the semantics and computational properties of GNNs while fully aligning with the RBN modeling paradigm. We also propose a maximum a-posteriori (MAP) inference method for these neuro-symbolic models. To demonstrate the framework's versatility, we apply it to two distinct problems. First, we transform a GNN for node classification into a collective classification model that explicitly models homo- and heterophilic label patterns, substantially improving accuracy. Second, we introduce a multi-objective network optimization problem in environmental planning, where MAP inference supports complex decision-making. Both applications include new publicly available benchmark datasets. This work introduces a powerful and coherent neuro-symbolic approach to graph data, bridging learning and reasoning in ways that enable novel applications and improved performance across diverse tasks.
In multi-agent environments, effective interaction hinges on understanding the beliefs and intentions of other agents. While prior work on goal recognition has largely treated the observer as a passive reasoner, Active Goal Recognition (AGR) focuses on strategically gathering information to reduce uncertainty. We adopt a probabilistic framework for Active Goal Recognition and propose an integrated solution that combines a joint belief update mechanism with a Monte Carlo Tree Search (MCTS) algorithm, allowing the observer to plan efficiently and infer the actor's hidden goal without requiring domain-specific knowledge. Through comprehensive empirical evaluation in a grid-based domain, we show that our joint belief update significantly outperforms passive goal recognition, and that our domain-independent MCTS performs comparably to our strong domain-specific greedy baseline. These results establish our solution as a practical and robust framework for goal inference, advancing the field toward more interactive and adaptive multi-agent systems.
Stellar flares and coronal mass ejections (CMEs) can strip planetary atmospheres, reducing the potential habitability of terrestrial planets. While flares have been observed for decades, stellar CMEs remain elusive. Extreme ultraviolet (EUV) emissions are sensitive to both flares and CME-induced coronal dimming. We assess the detectability of stellar CME-induced EUV dimming events by adapting a known "Sun-as-a-star" dimming technique -- validated by the Solar Dynamics Observatory's EUV Variability Experiment (EVE) -- to stellar conditions. We adapt the solar data to reflect a range of stellar intensities, accounting for intrinsic brightness, distance, and interstellar medium (ISM) attenuation. We generate synthetic light curves for two different missions: the legacy EUV Explorer (EUVE) and the proposed ESCAPE mission. Our results indicate that dimming detections are well within reach. EUVE's broadband imager was capable of detecting stellar CMEs -- albeit with limited spectral (temperature) resolution -- but that was not part of the observing plan. EUVE's spectroscopic survey lacked sufficient sensitivity for CME detections. Optimizing modern instrument design for this task would make the observation fully feasible. In this work, we present a tool to explore the stellar-CME detection parameter space. Our tool shows that, for an instrument with performance similar to ESCAPE, a 600-second integration period, and spectra integrated into bands, any star with an X-ray flux $\geq 2.51 \times 10^{-12}$ erg s$^{-1}$ cm$^{-2}$ should yield a $\geq 3\sigma$ detection even for a modest few-percent dimming profile, regardless of ISM attenuation. Such measurements would be crucial for understanding the space weather environments of exoplanet host stars and, ultimately, for evaluating planetary habitability.
Retrosynthesis planning remains a central challenge in molecular discovery due to the vast and complex chemical reaction space. While traditional template-based methods offer tractability, they suffer from poor scalability and limited generalization, and template-free generative approaches risk generating invalid reactions. In this work, we propose TempRe, a generative framework that reformulates template-based approaches as sequence generation, enabling scalable, flexible, and chemically plausible retrosynthesis. We evaluated TempRe across single-step and multi-step retrosynthesis tasks, demonstrating its superiority over both template classification and SMILES-based generation methods. On the PaRoutes multi-step benchmark, TempRe achieves strong top-k route accuracy. Furthermore, we extend TempRe to direct multi-step synthesis route generation, providing a lightweight and efficient alternative to conventional single-step and search-based approaches. These results highlight the potential of template generative modeling as a powerful paradigm in computer-aided synthesis planning.
Especially in regions with high solar irradiation, photocatalysis presents a promising low-cost "green" hydrogen production option. Thus, this paper analyzes the impacts of increasing photocatalysis shares on the European energy system using an open-source energy system optimization model covering the electricity, industry, and heating sectors with high spatial and temporal resolution. Photocatalysis deployment is investigated at various market shares by exogenously altering photocatalysis costs. The results show that integrating photocatalysis necessitates systematic adjustments since it lacks the flexible load attributes of water electrolysis. Therefore, a significant geographic shift in hydrogen production and demand from the Northwest to South Europe is expected in the case of large-scale photocatalysis adoption. Despite these challenges, photocatalysis is installed at costs within the range of photocatalysis cost projections. Thus, photocatalysis could contribute to a critical diversification of hydrogen production, easing material demands for other renewable technologies. Nevertheless, it requires strategic planning to avoid lock-ins and to maximize its potential.
Understanding the formation, propagation, and breakdown of the main vortex ring (VR) is essential for characterizing left ventricular (LV) hemodynamics, as its dynamics have been linked to the onset and progression of cardiovascular diseases. In this study, two idealized LV geometries, a semi-ellipsoidal chamber and a more rounded configuration, are analyzed using computational fluid dynamics (CFD) simulations under physiological conditions, with the aim of investigating the fluid mechanisms that govern VR evolution during diastole. Modal decomposition techniques, specifically proper orthogonal decomposition (POD) and higher order dynamic mode decomposition (HODMD), are employed to identify dominant coherent structures and track their temporal behavior. To the authors' knowledge, this is the first time such an analysis is conducted with the explicit goal of unraveling the physics of vortex ring dynamics in idealized ventricular chambers. The comparative approach reveals that geometric morphology plays a central role in modulating the flow: in one case, early interaction between the VR and the ventricular wall, driven by the chamber's shape, triggers strong nonlinear interactions and a more intricate dynamic evolution. In the other, the vortex ring propagates more freely toward the apex before dissipating, resulting in a more organized flow pattern and simpler spectral content. These findings advance the understanding of flow-based indicators relevant to early diagnosis and treatment planning in cardiovascular disease. Moreover, they illustrate how the choice of ventricular geometry can influence not only the simulated hemodynamics, but also the effectiveness of data-driven analysis tools, depending on the clinical context under study.
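For readers unfamiliar with the decomposition step, POD of a snapshot matrix reduces to an SVD of the mean-subtracted data; the sketch below uses random data in place of the CFD velocity snapshots and does not reproduce the HODMD analysis.

```python
import numpy as np

def pod(snapshots):
    """Proper orthogonal decomposition of a snapshot matrix via SVD.

    snapshots: (n_points, n_times) field snapshots, one column per time instant.
    Returns spatial modes, temporal coefficients, and relative modal energies.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean                   # subtract the temporal mean
    U, S, Vt = np.linalg.svd(fluct, full_matrices=False)
    modes = U                                  # spatial POD modes
    coeffs = np.diag(S) @ Vt                   # temporal coefficients
    energy = S**2 / np.sum(S**2)               # relative energy per mode
    return modes, coeffs, energy

# toy snapshot matrix standing in for CFD velocity fields
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 80))
_, _, energy = pod(data)
print(energy[:5])
```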
This paper proposes an algorithm to efficiently solve multistage stochastic programs with block separable recourse where each recourse problem is a multistage stochastic program with stage-wise independent uncertainty. The algorithm first decomposes the full problem into a reduced master problem and subproblems using Adaptive Benders decomposition. The subproblems are then solved by an enhanced SDDP. The enhancement includes (1) valid bounds at each iteration, (2) a path exploration rule, (3) cut sharing among subproblems, and (4) guaranteed $\delta$-optimal convergence. The cuts for the subproblems are then shared by calling adaptive oracles. The key contribution of the paper is the first algorithm for solving this class of problems. The algorithm is demonstrated on a power system investment planning problem with multi-timescale uncertainty. The case study results show that (1) the proposed algorithm can efficiently solve this type of problem, (2) deterministic wind modelling underestimates the objective function, and (3) stochastic modelling of wind leads to different investment decisions.
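For context, the value function of each subproblem is approximated in the reduced master problem by Benders optimality cuts of the generic form

$\theta_s \ge Q_s(\hat{x}) + \lambda_s^{\top}(x - \hat{x}),$

where $\hat{x}$ is the trial first-stage decision, $Q_s(\hat{x})$ the subproblem value, and $\lambda_s$ a corresponding dual (subgradient) vector; the enhanced SDDP builds analogous stage-wise cuts, which the paper shares across subproblems through the adaptive oracles. This is the standard textbook cut form, not necessarily the paper's exact notation.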
The long-standing claim of dark matter detection by the DAMA experiment remains a crucial open question in astroparticle physics. A key step towards its independent verification is the development of NaI(Tl)-based detectors with improved sensitivity at low energies. The majority of NaI(Tl)-based experiments rely on conventional photomultiplier tubes (PMTs) as single photon detectors, which present technological limitations in terms of light collection, intrinsic radioactivity and a high noise contribution at keV energies. ASTAROTH is an R&D project developing a NaI(Tl)-based detector where the scintillation light is read out by silicon photomultiplier (SiPM) matrices. SiPMs exhibit high photon detection efficiency, negligible radioactivity, and, most importantly, a dark noise nearly two orders of magnitude lower than that of PMTs when operated at cryogenic temperature. To this end, ASTAROTH features a custom-designed cryostat based on a bath of cryogenic fluid, able to safely operate the detector and the read-out electronics down to about 80K. We report the first experimental characterization of a 360 g NaI(Tl) detector read out by a large area (5 cm x 5 cm) SiPM matrix. The photoelectron yield obtained with a preliminary configuration is 7.2 photoelectrons/keV, which is rather promising, also in light of several planned developments. The signal-to-noise ratio and the energy threshold attainable with SiPMs are expected to improve the sensitivity for dark matter searches beyond the reach of current-generation PMT-based detectors. This result is the first proof of the viability of this technology and sets a milestone toward the design of future large-scale experiments.
With the rapid advancement of autonomous driving technology, vehicle-to-everything (V2X) communication has emerged as a key enabler for extending perception range and enhancing driving safety by providing visibility beyond the line of sight. However, integrating multi-source sensor data from both ego-vehicles and infrastructure under real-world constraints, such as limited communication bandwidth and dynamic environments, presents significant technical challenges. To facilitate research in this area, we organized the End-to-End Autonomous Driving through V2X Cooperation Challenge, which features two tracks: cooperative temporal perception and cooperative end-to-end planning. Built on the UniV2X framework and the V2X-Seq-SPD dataset, the challenge attracted participation from over 30 teams worldwide and established a unified benchmark for evaluating cooperative driving systems. This paper describes the design and outcomes of the challenge, highlights key research problems including bandwidth-aware fusion, robust multi-agent planning, and heterogeneous sensor integration, and analyzes emerging technical trends among top-performing solutions. By addressing practical constraints in communication and data fusion, the challenge contributes to the development of scalable and reliable V2X-cooperative autonomous driving systems.
In this work, we study how vision-language models (VLMs) can be utilized to enhance the safety of autonomous driving systems, including perception, situational understanding, and path planning. However, existing research has largely overlooked the evaluation of these models in traffic safety-critical driving scenarios. To bridge this gap, we create a benchmark (SafeDrive228K) and propose a new baseline based on VLM with knowledge graph-based retrieval-augmented generation (SafeDriveRAG) for visual question answering (VQA). Specifically, we introduce SafeDrive228K, the first large-scale multimodal question-answering benchmark comprising 228K examples across 18 sub-tasks. This benchmark encompasses a diverse range of traffic safety queries, from traffic accidents and corner cases to common safety knowledge, enabling a thorough assessment of the comprehension and reasoning abilities of the models. Furthermore, we propose a plug-and-play multimodal knowledge graph-based retrieval-augmented generation approach that employs a novel multi-scale subgraph retrieval algorithm for efficient information retrieval. By incorporating traffic safety guidelines collected from the Internet, this framework further enhances the model's capacity to handle safety-critical situations. Finally, we conduct comprehensive evaluations on five mainstream VLMs to assess their reliability in safety-sensitive driving tasks. Experimental results demonstrate that integrating RAG significantly improves performance, achieving a +4.73% gain in Traffic Accidents tasks, +8.79% in Corner Cases tasks, and +14.57% in Traffic Safety Commonsense across five mainstream VLMs, underscoring the potential of our proposed benchmark and methodology for advancing research in traffic safety. Our source code and data are available at https://github.com/Lumos0507/SafeDriveRAG.
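As a rough sketch of retrieving knowledge-graph context at several neighborhood scales (the actual multi-scale subgraph retrieval algorithm is not reproduced here), the snippet below collects ego-graphs of query-linked entities at increasing radii using networkx; the toy graph and seed entities are invented for illustration.

```python
import networkx as nx

def multiscale_subgraphs(G, seed_nodes, radii=(1, 2)):
    """Retrieve neighborhoods of query-linked entities at several scales.

    Generic ego-graph sketch; entity linking and subgraph ranking are omitted.
    """
    return {(n, r): nx.ego_graph(G, n, radius=r)
            for n in seed_nodes for r in radii}

# toy traffic-safety knowledge graph
G = nx.Graph()
G.add_edges_from([("wet road", "hydroplaning"), ("hydroplaning", "reduce speed"),
                  ("reduce speed", "safe following distance")])
subs = multiscale_subgraphs(G, ["wet road"])
print({k: list(v.nodes) for k, v in subs.items()})
```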
Modeling and evaluation of automated vehicles (AVs) in mixed-autonomy traffic is essential prior to their safe and efficient deployment. This is especially important at urban junctions where complex multi-agent interactions occur. Current approaches for modeling vehicular maneuvers and interactions at urban junctions have limitations in formulating non-cooperative interactions and vehicle dynamics within a unified mathematical framework. Previous studies either assume predefined paths or rely on cooperation and central controllability, limiting their realism and applicability in mixed-autonomy traffic. This paper addresses these limitations by proposing a modeling framework for trajectory planning and decentralized vehicular control at urban junctions. The framework employs a bi-level structure where the upper level generates kinematically feasible reference trajectories using an efficient graph search algorithm with a custom heuristic function, while the lower level employs a predictive controller for trajectory tracking and optimization. Unlike existing approaches, our framework does not require central controllability or knowledge sharing among vehicles. The vehicle kinematics are explicitly incorporated at both levels, and acceleration and steering angle are used as control variables. This intuitive formulation facilitates analysis of traffic efficiency, environmental impacts, and motion comfort. The framework's decentralized structure accommodates operational and stochastic elements, such as vehicles' detection range, perception uncertainties, and reaction delay, making the model suitable for safety analysis. Numerical and simulation experiments across diverse scenarios demonstrate the framework's capability in modeling accurate and realistic vehicular maneuvers and interactions at various urban junctions, including unsignalized intersections and roundabouts.
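The upper level's graph search can be pictured with a generic A* sketch; the kinematic state lattice, custom heuristic, and cost terms used in the paper are replaced here by a toy 2D grid with a Manhattan-distance heuristic.

```python
import heapq

def a_star(start, goal, neighbors, cost, heuristic):
    """Generic A* search, standing in for the upper-level reference-trajectory
    search; the paper's kinematically feasible expansion is not reproduced."""
    open_set = [(heuristic(start, goal), 0.0, start, [start])]
    closed = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for nxt in neighbors(node):
            if nxt in closed:
                continue
            g2 = g + cost(node, nxt)
            heapq.heappush(open_set, (g2 + heuristic(nxt, goal), g2, nxt, path + [nxt]))
    return None, float("inf")

# toy 2D grid example
def neighbors(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < 10 and 0 <= y + dy < 10]

path, g = a_star((0, 0), (7, 5), neighbors,
                 cost=lambda a, b: 1.0,
                 heuristic=lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1]))
print(len(path), g)
```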
Human demonstration data is often ambiguous and incomplete, motivating imitation learning approaches that also exhibit reliable planning behavior. A common paradigm for planning-from-demonstration involves learning a reward function via Inverse Reinforcement Learning (IRL) and then deploying this reward via Model Predictive Control (MPC). Toward unifying these methods, we derive a formulation that replaces the policy in IRL with a planning-based agent. With connections to Adversarial Imitation Learning, this formulation enables end-to-end interactive learning of planners from observation-only demonstrations. In addition to benefits in interpretability, complexity, and safety, we study and observe significant improvements in sample efficiency, out-of-distribution generalization, and robustness. The study includes evaluations in both simulated control benchmarks and real-world navigation experiments using few-to-single observation-only demonstrations.
A drone trajectory planner should be able to dynamically adjust the safety-efficiency trade-off according to varying mission requirements in unknown environments. Although traditional polynomial-based planners offer computational efficiency and smooth trajectory generation, they require expert knowledge to tune multiple parameters to adjust this trade-off. Moreover, even with careful tuning, the resulting adjustment may fail to achieve the desired trade-off. Similarly, although reinforcement learning-based planners are adaptable in unknown environments, they do not explicitly address the safety-efficiency trade-off. To overcome this limitation, we introduce a Decision Transformer-based trajectory planner that leverages a single parameter, Return-to-Go (RTG), as a \emph{temperature parameter} to dynamically adjust the safety-efficiency trade-off. In our framework, since RTG intuitively measures the safety and efficiency of a trajectory, RTG tuning does not require expert knowledge. We validate our approach using Gazebo simulations in both structured grid and unstructured random environments. The experimental results demonstrate that our planner can dynamically adjust the safety-efficiency trade-off by simply tuning the RTG parameter. Furthermore, our planner outperforms existing baseline methods across various RTG settings, generating safer trajectories when tuned for safety and more efficient trajectories when tuned for efficiency. Real-world experiments further confirm the reliability and practicality of our proposed planner.
This full research paper investigates the impact of generative AI (GenAI) on the learner experience, with a focus on how learners engage with and utilize the information it provides. In e-learning environments, learners often need to navigate a complex information space on their own. This challenge is further compounded in interdisciplinary fields like bioinformatics, due to the varied prior knowledge and backgrounds. In this paper, we studied how GenAI influences information search in bioinformatics research: (1) How do interactions with a GenAI chatbot influence learner orienteering behaviors?; and (2) How do learners identify information scent in GenAI chatbot responses? We adopted an autoethnographic approach to investigate these questions. GenAI was found to support orienteering once a learning plan was established, but it was counterproductive prior to that. Moreover, traditionally value-rich information sources such as bullet points and related terms proved less effective when applied to GenAI responses. Information scents were primarily recognized through the presence or absence of prior knowledge of the domain. These findings suggest that GenAI should be adopted into e-learning environments with caution, particularly in interdisciplinary learning contexts.