planning - 2025-10-08

Drive&Gen: Co-Evaluating End-to-End Driving and Video Generation Models

Authors: Jiahao Wang, Zhenpei Yang, Yijing Bai, Yingwei Li, Yuliang Zou, Bo Sun, Abhijit Kundu, Jose Lezama, Luna Yue Huang, Zehao Zhu, Jyh-Jing Hwang, Dragomir Anguelov, Mingxing Tan, Chiyu Max Jiang
Date: 2025-10-07 17:58:32

Recent advances in generative models have sparked exciting new possibilities in the field of autonomous vehicles. Specifically, video generation models are now being explored as controllable virtual testing environments. Simultaneously, end-to-end (E2E) driving models have emerged as a streamlined alternative to conventional modular autonomous driving systems, gaining popularity for their simplicity and scalability. However, the application of these techniques to simulation and planning raises important questions. First, while video generation models can generate increasingly realistic videos, can these videos faithfully adhere to the specified conditions and be realistic enough for E2E autonomous planner evaluation? Second, given that data is crucial for understanding and controlling E2E planners, how can we gain deeper insights into their biases and improve their ability to generalize to out-of-distribution scenarios? In this work, we bridge the gap between the driving models and generative world models (Drive&Gen) to address these questions. We propose novel statistical measures leveraging E2E drivers to evaluate the realism of generated videos. By exploiting the controllability of the video generation model, we conduct targeted experiments to investigate distribution gaps affecting E2E planner performance. Finally, we show that synthetic data produced by the video generation model offers a cost-effective alternative to real-world data collection. This synthetic data effectively improves E2E model generalization beyond existing Operational Design Domains, facilitating the expansion of autonomous vehicle services into new operational contexts.

Vision-Guided Targeted Grasping and Vibration for Robotic Pollination in Controlled Environments

Authors: Jaehwan Jeong, Tuan-Anh Vu, Radha Lahoti, Jiawen Wang, Vivek Alumootil, Sangpil Kim, M. Khalid Jawed
Date: 2025-10-07 17:19:22

Robotic pollination offers a promising alternative to manual labor and bumblebee-assisted methods in controlled agriculture, where wind-driven pollination is absent and regulatory restrictions limit the use of commercial pollinators. In this work, we present and validate a vision-guided robotic framework that uses data from an end-effector-mounted RGB-D sensor and combines 3D plant reconstruction, targeted grasp planning, and physics-based vibration modeling to enable precise pollination. First, the plant is reconstructed in 3D and registered to the robot coordinate frame to identify obstacle-free grasp poses along the main stem. Second, a discrete elastic rod model predicts the relationship between actuation parameters and flower dynamics, guiding the selection of optimal pollination strategies. Finally, a manipulator with soft grippers grasps the stem and applies controlled vibrations to induce pollen release. End-to-end experiments demonstrate a 92.5% main-stem grasping success rate, and simulation-guided optimization of vibration parameters further validates the feasibility of our approach, ensuring that the robot can safely and effectively perform pollination without damaging the flower. To our knowledge, this is the first robotic system to jointly integrate vision-based grasping and vibration modeling for automated precision pollination.

Medical Vision Language Models as Policies for Robotic Surgery

Authors: Akshay Muppidi, Martin Radfar
Date: 2025-10-07 15:54:34

Vision-based Proximal Policy Optimization (PPO) struggles with visual observation-based robotic laparoscopic surgical tasks due to the high-dimensional nature of visual input, the sparsity of rewards in surgical environments, and the difficulty of extracting task-relevant features from raw visual data. We introduce a simple approach integrating MedFlamingo, a medical domain-specific Vision-Language Model, with PPO. Our method is evaluated on five diverse laparoscopic surgery task environments in LapGym, using only endoscopic visual observations. MedFlamingo PPO outperforms both standard vision-based PPO and OpenFlamingo PPO baselines and converges faster, achieving task success rates exceeding 70% across all environments, with improvements ranging from 66.67% to 1114.29% over the baselines. By processing task observations and instructions once per episode to generate high-level planning tokens, our method efficiently combines medical expertise with real-time visual feedback. Our results highlight the value of specialized medical knowledge in robotic surgical planning and decision-making.

The DISTANT Design for Remote Transmission and Steering Systems for Planetary Robotics

Authors: Cristina Luna, Alba Guerra, Almudena Moreno, Manuel Esquer, Willy Roa, Mateusz Krawczak, Robert Popela, Piotr Osica, Davide Nicolis
Date: 2025-10-07 14:39:40

Planetary exploration missions require robust locomotion systems capable of operating in extreme environments over extended periods. This paper presents the DISTANT (Distant Transmission and Steering Systems) design, a novel approach for relocating rover traction and steering actuators from wheel-mounted positions to a thermally protected warm box within the rover body. The design addresses critical challenges in long-distance traversal missions by protecting sensitive components from thermal cycling, dust contamination, and mechanical wear. A double wishbone suspension configuration with cardan joints and capstan drive steering has been selected as the optimal architecture following comprehensive trade-off analysis. The system enables independent wheel traction, steering control, and suspension management whilst maintaining all motorisation within the protected environment. The design meets a 50 km traverse requirement without performance degradation, with integrated dust protection mechanisms and thermal management solutions. Testing and validation activities are planned for Q1 2026 following breadboard manufacturing at 1:3 scale.

Weighted Food Webs Make Computing Phylogenetic Diversity So Much Harder

Authors: Jannik Schestag
Date: 2025-10-07 13:22:27

Phylogenetic trees represent certain species and their likely ancestors. In such a tree, present-day species are leaves and an edge from u to v indicates that u is an ancestor of v. Weights on these edges indicate the phylogenetic distance. The phylogenetic diversity (PD) of a set of species A is the total weight of edges that are on any path between the root of the phylogenetic tree and a species in A. Selecting a small set of species that maximizes phylogenetic diversity for a given phylogenetic tree is an essential task in preservation planning, where limited resources naturally prevent saving all species. An optimal solution can be found with a greedy algorithm [Steel, Systematic Biology, 2005; Pardi and Goldman, PLoS Genetics, 2005]. However, when a food web representing predator-prey relationships is given, finding a set of species that optimizes phylogenetic diversity subject to the condition that each saved species should be able to find food among the preserved species is NP-hard [Spillner et al., IEEE/ACM, 2008]. We present a generalization of this problem, where, inspired by biological considerations, the food web has weighted edges to represent the importance of predator-prey relationships. We show that this version is NP-hard even when both structures, the food web and the phylogenetic tree, are stars. To cope with this intractability, we proceed in two directions. Firstly, we study special cases where a species can only survive if a given fraction of its prey is preserved. Secondly, we analyze these problems through the lens of parameterized complexity. Our results include that finding a solution is fixed-parameter tractable with respect to the vertex cover number of the food web, assuming the phylogenetic tree is a star.
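As an illustrative aside, the greedy procedure cited above (Steel 2005; Pardi and Goldman 2005) is easy to sketch. The toy tree, node names, and edge weights below are invented for illustration and are not from the paper:

```python
# Phylogenetic diversity (PD) of a species set and greedy selection.
# Each edge is identified by its child node; weight[v] is the weight of
# the edge from parent[v] down to v.

def pd_score(parent, weight, species):
    """Total weight of edges lying on any root-to-leaf path for `species`."""
    covered = set()
    for s in species:
        v = s
        while v in parent:      # walk up until the root (which has no parent)
            covered.add(v)
            v = parent[v]
    return sum(weight[v] for v in covered)

def greedy_pd(parent, weight, leaves, k):
    """Greedily add the species with the largest PD gain; optimal on trees."""
    chosen = []
    for _ in range(k):
        best = max((l for l in leaves if l not in chosen),
                   key=lambda l: pd_score(parent, weight, chosen + [l]))
        chosen.append(best)
    return chosen

# Toy tree: root r; internal node u with leaves a, b; leaf c directly under r.
parent = {"u": "r", "a": "u", "b": "u", "c": "r"}
weight = {"u": 1.0, "a": 2.0, "b": 0.5, "c": 2.5}
print(greedy_pd(parent, weight, ["a", "b", "c"], 2))  # ['a', 'c'], PD = 5.5
```

The weighted food-web constraint studied in the paper is precisely what breaks this greedy optimality and makes the problem NP-hard.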

The Safety Challenge of World Models for Embodied AI Agents: A Review

Authors: Lorenzo Baraldi, Zifan Zeng, Chongzhe Zhang, Aradhana Nayak, Hongbo Zhu, Feng Liu, Qunli Zhang, Peng Wang, Shiming Liu, Zheng Hu, Angelo Cangelosi, Lorenzo Baraldi
Date: 2025-10-07 12:35:09

The rapid progress in embodied artificial intelligence has highlighted the necessity for more advanced and integrated models that can perceive, interpret, and predict environmental dynamics. In this context, World Models (WMs) have been introduced to provide embodied agents with the abilities to anticipate future environmental states and fill in knowledge gaps, thereby enhancing agents' ability to plan and execute actions. However, when dealing with embodied agents, it is essential to ensure that predictions are safe for both the agent and the environment. In this article, we conduct a comprehensive literature review of World Models in the domains of autonomous driving and robotics, with a specific focus on the safety implications of scene and control generation tasks. Our review is complemented by an empirical analysis, wherein we collect and examine predictions from state-of-the-art models, identify and categorize common faults (herein referred to as pathologies), and provide a quantitative evaluation of the results.

Short-Pulse High-Power THz Generation Using Optical Klystron FELs: Simulation Results

Authors: Najmeh Mirian
Date: 2025-10-07 12:05:28

We investigate the feasibility of extending the optical klystron (OK) concept into the terahertz (THz) regime, where strong radiation slippage and diffraction fundamentally challenge efficient free-electron laser (FEL) operation. Numerical simulations were carried out for resonant wavelengths of 10, 30, and 100 $\mu$m using parameters relevant to the planned DALI facility. The results show that while long wavelengths exhibit rapid energy growth, they suffer from significant temporal broadening due to slippage, whereas shorter wavelengths require large dispersive strengths $R_{56}$ to achieve sufficient bunching. Harmonic bunching is demonstrated as a viable alternative to reduce the required $R_{56}$ at short wavelengths. Diffraction was analyzed and found not to limit the present design, as the radiation spot size remains well within the beamline aperture. To address the slippage challenge, we propose and numerically demonstrate a novel chicane-embedded optical delay scheme, which restores phase alignment between the radiation and microbunched electrons. Simulations confirm that careful tuning of the dispersive strengths allows staged amplification, preserving beam quality and reaching multi-megawatt output power. These results highlight the potential of THz-tailored optical klystrons to generate compact, short, and high-intensity THz pulses, and lay the groundwork for future experimental studies and facility implementation.

An opportunity to improve Data Center Efficiency: Optimizing the Server's Upgrade Cycle

Authors: Panagiota Nikolaou, Freddy Gabbay, Jawad Haj-Yahya, Yiannakis Sazeides
Date: 2025-10-07 11:06:46

This work aims to improve a data center's efficiency by optimizing the server upgrade plan: determining the optimal timing for replacing old servers with new ones. The opportunity presented by this approach is demonstrated through a study based on historical server data. The study establishes a significant opportunity to increase the QPS/(TCO×CO2) metric by formulating a global upgrade plan at the data center's design time covering its entire life cycle. This plan leverages information such as server entry year, performance, and active power consumption for both existing and future servers. Our findings reveal that an optimal global upgrade plan may involve upgrades at non-fixed time periods and outperforms local upgrade plans. Local upgrade plans follow a fixed, equal-length cycle and make decisions based only on currently available server models. These local plans select the best available server at each upgrade cycle without accounting for future server releases.
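To make the global-vs-local distinction concrete, here is a minimal sketch of a global plan search with full knowledge of future releases. The server specs, upgrade cost, and the QPS-per-cost objective (a crude stand-in for QPS/(TCO×CO2)) are all invented for illustration:

```python
from itertools import combinations

# Illustrative "global upgrade plan" search: choose upgrade years over the
# data center's lifetime, knowing future server releases in advance, to
# maximize a QPS-per-cost objective. All numbers are made up.
servers = {2024: dict(qps=100, cost=10), 2026: dict(qps=180, cost=11),
           2028: dict(qps=200, cost=12)}          # release year -> specs
YEARS, UPGRADE_COST = range(2024, 2032), 5

def plan_value(upgrade_years):
    total_qps = total_cost = 0
    current = 2024                    # initial deployment uses the 2024 model
    for y in YEARS:
        if y in upgrade_years:        # adopt the best model released by year y
            current = max(r for r in servers if r <= y)
            total_cost += UPGRADE_COST
        total_qps += servers[current]["qps"]
        total_cost += servers[current]["cost"]
    return total_qps / total_cost

# Exhaustive search over plans with up to two upgrades (toy horizon only).
best = max((frozenset(c) for n in range(3) for c in combinations(YEARS, n)),
           key=plan_value)
print(sorted(best), round(plan_value(best), 2))   # [2026] 14.07
```

In this toy instance the best plan upgrades once, in 2026, and skips the newer 2028 model entirely, illustrating that an optimal global plan need not follow a fixed upgrade cycle.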

Precise and Efficient Collision Prediction under Uncertainty in Autonomous Driving

Authors: Marc Kaufeld, Johannes Betz
Date: 2025-10-07 09:50:24

This research introduces two efficient methods to estimate the collision risk of planned trajectories in autonomous driving under uncertain driving conditions. Deterministic collision checks of planned trajectories are often inaccurate or overly conservative, as noisy perception, localization errors, and uncertain predictions of other traffic participants introduce significant uncertainty into the planning process. This paper presents two semi-analytic methods to compute the collision probability of planned trajectories with arbitrary convex obstacles. The first approach evaluates the probability of spatial overlap between an autonomous vehicle and surrounding obstacles, while the second estimates the collision probability based on stochastic boundary crossings. Both formulations incorporate full state uncertainties, including position, orientation, and velocity, and achieve high accuracy at computational costs suitable for real-time planning. Simulation studies verify that the proposed methods closely match Monte Carlo results while providing significant runtime advantages, enabling their use in risk-aware trajectory planning. The collision estimation methods are available as open-source software: https://github.com/TUM-AVS/Collision-Probability-Estimation
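As a point of reference for what the semi-analytic methods are validated against, a Monte Carlo estimate of the collision probability with a convex obstacle can be sketched as follows. The Gaussian position uncertainty and the unit-square obstacle are illustrative choices, not the paper's scenarios:

```python
import numpy as np

# Monte Carlo baseline: probability that an uncertain ego position
# (Gaussian) falls inside a convex obstacle given as halfspaces A x <= b.
rng = np.random.default_rng(0)

def mc_collision_prob(mean, cov, halfspaces, n=200_000):
    A, b = halfspaces
    x = rng.multivariate_normal(mean, cov, size=n)   # sample positions
    inside = np.all(x @ A.T <= b, axis=1)            # all halfspaces satisfied
    return inside.mean()

# Unit square obstacle [0,1]^2 written as A x <= b.
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], float)
b = np.array([1.0, 0.0, 1.0, 0.0])
p = mc_collision_prob(mean=[0.5, 1.2], cov=0.04 * np.eye(2), halfspaces=(A, b))
print(round(p, 3))  # ~ P(0<=x<=1) * P(0<=y<=1) for independent coordinates
```

The paper's contribution is to replace exactly this kind of sampling, which is accurate but expensive, with semi-analytic formulas fast enough for real-time planning.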

Stable Robot Motions on Manifolds: Learning Lyapunov-Constrained Neural Manifold ODEs

Authors: David Boetius, Abdelrahman Abdelnaby, Ashok Kumar, Stefan Leue, Abdalla Swikir, Fares J. Abu-Dakka
Date: 2025-10-07 09:16:48

Learning stable dynamical systems from data is crucial for safe and reliable robot motion planning and control. However, extending stability guarantees to trajectories defined on Riemannian manifolds poses significant challenges due to the manifold's geometric constraints. To address this, we propose a general framework for learning stable dynamical systems on Riemannian manifolds using neural ordinary differential equations. Our method guarantees stability by projecting the neural vector field evolving on the manifold so that it strictly satisfies the Lyapunov stability criterion, ensuring stability at every system state. By leveraging a flexible neural parameterisation for both the base vector field and the Lyapunov function, our framework can accurately represent complex trajectories while respecting manifold constraints by evolving solutions directly on the manifold. We provide an efficient training strategy and demonstrate the framework's utility on Riemannian LASA datasets on the unit quaternion ($S^3$) and symmetric positive-definite matrix manifolds, as well as on robotic motions evolving on $\mathbb{R}^3 \times S^3$. We demonstrate the performance, scalability, and practical applicability of our approach through extensive simulations and by learning robot motions in a real-world experiment.
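The projection step has a well-known Euclidean counterpart that conveys the idea without the manifold machinery; the base field, Lyapunov function, and gain below are illustrative assumptions, not the paper's learned networks:

```python
import numpy as np

# Euclidean sketch of the Lyapunov projection: correct any base vector
# field f so that V strictly decreases along trajectories,
#   f_stab = f - max(0, <gradV, f> + alpha*V) * gradV / ||gradV||^2.

def stabilize(f, grad_V, V, alpha=0.1):
    def f_stab(x):
        g, fx = grad_V(x), f(x)
        violation = max(0.0, g @ fx + alpha * V(x))   # Lyapunov violation
        return fx - violation * g / (g @ g + 1e-12)   # minimal correction
    return f_stab

f = lambda x: np.array([x[1], x[0]])      # unstable base dynamics (example)
V = lambda x: 0.5 * x @ x                 # candidate Lyapunov function
grad_V = lambda x: x

f_s = stabilize(f, grad_V, V)
x = np.array([1.0, 1.0])
print(grad_V(x) @ f_s(x) <= -0.1 * V(x) + 1e-9)   # decrease condition holds
```

On a Riemannian manifold, the analogous correction is applied to the tangent-space vector field, with the gradient and inner product taken with respect to the manifold metric.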

DeLTa: Demonstration and Language-Guided Novel Transparent Object Manipulation

Authors: Taeyeop Lee, Gyuree Kang, Bowen Wen, Youngho Kim, Seunghyeok Back, In So Kweon, David Hyunchul Shim, Kuk-Jin Yoon
Date: 2025-10-07 08:18:29

Despite the prevalence of transparent object interactions in human everyday life, transparent robotic manipulation research remains limited to short-horizon tasks and basic grasping capabilities. Although some methods have partially addressed these issues, most of them have limitations in generalizability to novel objects and are insufficient for precise long-horizon robot manipulation. To address these limitations, we propose DeLTa (Demonstration and Language-Guided Novel Transparent Object Manipulation), a novel framework that integrates depth estimation, 6D pose estimation, and vision-language planning for precise long-horizon manipulation of transparent objects guided by natural task instructions. A key advantage of our method is its single-demonstration approach, which generalizes 6D trajectories to novel transparent objects without requiring category-level priors or additional training. Additionally, we present a task planner that refines the VLM-generated plan to account for the constraints of a single-arm, eye-in-hand robot for long-horizon object manipulation tasks. Through comprehensive evaluation, we demonstrate that our method significantly outperforms existing transparent object manipulation approaches, particularly in long-horizon scenarios requiring precise manipulation capabilities. Project page: https://sites.google.com/view/DeLTa25/

Generative AI-Driven Hierarchical Multi-Agent Framework for Zero-Touch Optical Networks

Authors: Yao Zhang, Yuchen Song, Shengnan Li, Yan Shi, Shikui Shen, Xiongyan Tang, Min Zhang, Danshi Wang
Date: 2025-10-07 07:12:52

The rapid development of Generative Artificial Intelligence (GenAI) has catalyzed a transformative technological revolution across all walks of life. As the backbone of wideband communication, optical networks are expected to achieve high-level autonomous operation and zero-touch management to accommodate their expanding network scales and escalating transmission bandwidth. The integration of GenAI is deemed a pivotal solution for realizing zero-touch optical networks. However, the lifecycle management of optical networks involves a multitude of tasks and necessitates seamless collaboration across multiple layers, which poses significant challenges to existing single-agent GenAI systems. In this paper, we propose a GenAI-driven hierarchical multi-agent framework designed to streamline multi-task autonomous execution for zero-touch optical networks. We present the architecture, implementation, and applications of this framework. A field-deployed mesh network is utilized to demonstrate three typical scenarios throughout the lifecycle of an optical network: quality of transmission estimation in the planning stage, dynamic channel adding/dropping in the operation stage, and system capacity increase in the upgrade stage. The case studies illustrate the capabilities of the multi-agent framework in multi-task allocation, coordination, execution, evaluation, and summarization. This work provides a promising approach for the future development of intelligent, efficient, and collaborative network management solutions, paving the way for more specialized and adaptive zero-touch optical networks.

Redefining Cost Estimation in Database Systems: The Role of Execution Plan Features and Machine Learning

Authors: Utsav Pathak, Amit Mankodi
Date: 2025-10-07 06:30:42

Accurate query runtime prediction is a critical component of effective query optimization in modern database systems. Traditional cost models, such as those used in PostgreSQL, rely on static heuristics that often fail to reflect actual query performance under complex and evolving workloads. This remains an active area of research, with recent work exploring machine learning techniques to replace or augment traditional cost estimators. In this paper, we present a machine learning-based framework for predicting SQL query runtimes using execution plan features extracted from PostgreSQL. Our approach integrates scalar and structural features from execution plans and semantic representations of SQL queries to train predictive models. We construct an automated pipeline for data collection and feature extraction using parameterized TPC-H queries, enabling systematic evaluation of multiple modeling techniques. Unlike prior efforts that focus either on cardinality estimation or on synthetic cost metrics, we model actual runtimes using fine-grained plan statistics and query embeddings derived from execution traces to improve model accuracy. We compare baseline regressors, a refined XGBoost model, and a sequential LSTM-based model to assess their effectiveness in runtime prediction. Our dataset includes over 1000 queries generated from TPC-H query templates executed in PostgreSQL with EXPLAIN ANALYZE. Experimental results show that the XGBoost model significantly outperforms the others, achieving a mean squared error of 0.3002 and prediction accuracy within 10% of the true runtime in over 65% of cases. The findings highlight the potential of tree-based learning combined with execution plan features for improving cost estimation in query optimizers.
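The two headline evaluation numbers, mean squared error and the fraction of predictions within 10% of the true runtime, are simple to compute. The tiny runtime arrays below are placeholders for a trained model's held-out predictions, not data from the paper:

```python
import numpy as np

# MSE and "within 10% of true runtime" hit rate for runtime predictions.

def mse(y_true, y_pred):
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def within_pct(y_true, y_pred, pct=0.10):
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    # Fraction of predictions whose absolute error is within pct of the truth.
    return float(np.mean(np.abs(y_pred - y_true) <= pct * y_true))

runtimes = [1.0, 2.0, 4.0, 8.0]      # true runtimes in seconds (illustrative)
predicted = [1.05, 1.7, 4.1, 8.6]    # a model's predictions (illustrative)
print(mse(runtimes, predicted), within_pct(runtimes, predicted))
# 0.115625 0.75  -> three of four predictions land within 10%
```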

Midway Network: Learning Representations for Recognition and Motion from Latent Dynamics

Authors: Christopher Hoang, Mengye Ren
Date: 2025-10-07 04:07:44

Object recognition and motion understanding are key components of perception that complement each other. While self-supervised learning methods have shown promise in their ability to learn from unlabeled data, they have primarily focused on obtaining rich representations for either recognition or motion rather than both in tandem. On the other hand, latent dynamics modeling has been used in decision making to learn latent representations of observations and their transformations over time for control and planning tasks. In this work, we present Midway Network, a new self-supervised learning architecture that is the first to learn strong visual representations for both object recognition and motion understanding solely from natural videos, by extending latent dynamics modeling to this domain. Midway Network leverages a midway top-down path to infer motion latents between video frames, as well as a dense forward prediction objective and hierarchical structure to tackle the complex, multi-object scenes of natural videos. We demonstrate that after pretraining on two large-scale natural video datasets, Midway Network achieves strong performance on both semantic segmentation and optical flow tasks relative to prior self-supervised learning methods. We also show that Midway Network's learned dynamics can capture high-level correspondence via a novel analysis method based on forward feature perturbation.

GO-Flock: Goal-Oriented Flocking in 3D Unknown Environments with Depth Maps

Authors: Yan Rui Tan, Wenqi Liu, Wai Lun Leong, John Guan Zhong Tan, Wayne Wen Huei Yong, Fan Shi, Rodney Swee Huat Teo
Date: 2025-10-07 03:46:21

Artificial Potential Field (APF) methods are widely used for reactive flocking control, but they often suffer from challenges such as deadlocks and local minima, especially in the presence of obstacles. Existing solutions to address these issues are typically passive, leading to slow and inefficient collective navigation. As a result, many APF approaches have only been validated in obstacle-free environments or simplified, pseudo-3D simulations. This paper presents GO-Flock, a hybrid flocking framework that integrates planning with reactive APF-based control. GO-Flock consists of an upstream Perception Module, which processes depth maps to extract waypoints and virtual agents for obstacle avoidance, and a downstream Collective Navigation Module, which applies a novel APF strategy to achieve effective flocking behavior in cluttered environments. We evaluate GO-Flock against passive APF-based approaches to demonstrate their respective merits, such as their flocking behavior and the ability to overcome local minima. Finally, we validate GO-Flock in obstacle-filled environments and in hardware-in-the-loop experiments, where we successfully flocked a team of nine drones, six physical and three virtual, in a forest environment.
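For readers unfamiliar with APF control, the reactive core that such frameworks build on can be sketched in a few lines. This is the textbook attractive/repulsive form with arbitrary gains and radii, not GO-Flock's novel strategy:

```python
import numpy as np

# Classic APF velocity command: attraction to the goal plus repulsion from
# obstacles within an influence radius d0. Gains are illustrative.

def apf_velocity(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=2.0):
    v = k_att * (goal - pos)                      # attractive term
    for o in obstacles:
        d = np.linalg.norm(pos - o)
        if 1e-9 < d < d0:                         # repel only within range d0
            v += k_rep * (1.0 / d - 1.0 / d0) * (pos - o) / d**3
    return v

pos, goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
obstacles = [np.array([1.0, 0.0])]
v = apf_velocity(pos, goal, obstacles)
print(v)  # repulsion reduces the x-component below the pure attraction of 5.0
```

The deadlock failure mode mentioned above is visible here: when the attractive and repulsive terms cancel exactly, `v` vanishes and the agent stalls, which is the local-minimum problem GO-Flock's planning layer is designed to escape.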

On the equivalence of $c$-potentiability and $c$-path boundedness in the sense of Artstein-Avidan, Sadovsky, and Wyczesany

Authors: Sedi Bartz, Heinz H. Bauschke, Yuan Gao
Date: 2025-10-07 03:36:30

A cornerstone of convex analysis, established by Rockafellar in 1966, asserts that a set has a potential if and only if it is cyclically monotone. This characterization was generalized to hold for any real-valued cost function $c$ and lies at the core of the structure of optimal transport plans. However, this equivalence fails to hold for costs that attain infinite values. In this paper, we explore potentiability for an infinite-valued cost $c$ under the assumption of $c$-path boundedness, a condition that was first introduced by Artstein-Avidan, Sadovsky and Wyczesany. This condition is necessary for potentiability and is more restrictive than $c$-cyclic monotonicity. We provide general settings and other conditions under which $c$-path boundedness is sufficient for potentiability, and therefore equivalent. We provide a general theorem for potentiability, requiring no topological assumptions on the spaces or the cost. We then prove sufficiency in separable metric spaces for costs that are continuous in their domain. Finally, we introduce the notion of a $c$-path bounded extension and use it to prove the existence of potentials for a special class of costs on $\mathbb{R}^2$. We illustrate our discussion and results with several examples.
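For orientation, the cyclic condition can be written out explicitly (this is the standard convention for cost-minimizing transport, stated here as background rather than the paper's notation): a set $\Gamma \subseteq X \times Y$ is $c$-cyclically monotone if, for every finite cycle $(x_1,y_1),\dots,(x_n,y_n) \in \Gamma$ with $x_{n+1} := x_1$,

```latex
\sum_{i=1}^{n} c(x_i, y_i) \;\le\; \sum_{i=1}^{n} c(x_{i+1}, y_i).
```

$c$-path boundedness is, roughly, the stronger requirement that the analogous comparison sums remain bounded along arbitrary finite paths in $\Gamma$ between fixed endpoints, not only along cycles; when $c$ takes infinite values, this extra control is what the paper's potentiability results rely on.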

Correlation-Aware Dual-View Pose and Velocity Estimation for Dynamic Robotic Manipulation

Authors: Mahboubeh Zarei, Robin Chhabra, Farrokh Janabi-Sharifi
Date: 2025-10-07 02:56:41

Accurate pose and velocity estimation is essential for effective spatial task planning in robotic manipulators. While centralized sensor fusion has traditionally been used to improve pose estimation accuracy, this paper presents a novel decentralized fusion approach to estimate both pose and velocity. We use dual-view measurements from an eye-in-hand and an eye-to-hand vision sensor configuration mounted on a manipulator to track a target object whose motion is modeled as a random walk (stochastic acceleration model). The robot runs two independent adaptive extended Kalman filters formulated on a matrix Lie group, developed as part of this work. These filters predict poses and velocities on the manifold $\mathbb{SE}(3) \times \mathbb{R}^3 \times \mathbb{R}^3$ and update the state on the manifold $\mathbb{SE}(3)$. The final fused state, comprising the fused pose and velocities of the target, is obtained using a correlation-aware fusion rule on Lie groups. The proposed method is evaluated on a UFactory xArm 850 equipped with Intel RealSense cameras, tracking a moving target. Experimental results validate the effectiveness and robustness of the proposed decentralized dual-view estimation framework, showing consistent improvements over state-of-the-art methods.

Learning-based model predictive control with moving horizon state estimation for autonomous racing

Authors: Yassine Kebbati, Andreas Rauh, Naima Ait-Oufroukh, Dalil Ichalal, Vincent Vigneron
Date: 2025-10-06 20:48:03

This paper addresses autonomous racing by introducing a real-time nonlinear model predictive controller (NMPC) coupled with a moving horizon estimator (MHE). The racing problem is solved by an NMPC-based off-line trajectory planner that computes the best trajectory while considering the physical limits of the vehicle and circuit constraints. The developed controller is further enhanced with a learning extension based on Gaussian process regression that improves model predictions. The proposed control, estimation, and planning schemes are evaluated on two different race tracks. Code can be found here: https://github.com/yassinekebbati/GP_Learning-based_MPC_with_MHE
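The Gaussian-process "learning extension" pattern, correcting a nominal model with a regressor fitted to observed residuals, can be sketched in one dimension. The dynamics, kernel length scale, and data below are illustrative, and the GP posterior mean is implemented directly from the kernel formula rather than taken from the authors' code:

```python
import numpy as np

# Fit an RBF-kernel GP posterior mean to the residual between a nominal
# model and observed dynamics, then add the correction to predictions.

def rbf(a, b, ls=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

true_f = lambda x: 0.9 * x + 0.2 * np.sin(3 * x)   # unknown real dynamics
nominal_f = lambda x: 0.9 * x                      # simple nominal model

X = np.linspace(-2, 2, 25)                         # logged states
y = true_f(X) - nominal_f(X)                       # model-error residuals

K = rbf(X, X) + 1e-6 * np.eye(len(X))              # jitter for conditioning
alpha = np.linalg.solve(K, y)                      # GP posterior-mean weights

def corrected_f(x):
    return nominal_f(x) + rbf(np.atleast_1d(x), X) @ alpha

x = 0.7
print(abs(corrected_f(x)[0] - true_f(x)) < abs(nominal_f(x) - true_f(x)))
```

The corrected model is strictly closer to the true dynamics at the test point than the nominal model, which is the mechanism by which GP regression "improves model predictions" inside the NMPC loop.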

Adaptive Dynamics Planning for Robot Navigation

Authors: Lu Yuanjie, Mao Mingyang, Xu Tong, Wang Linji, Lin Xiaomin, Xiao Xuesu
Date: 2025-10-06 19:50:16

Autonomous robot navigation systems often rely on hierarchical planning, where global planners compute collision-free paths without considering dynamics, and local planners enforce dynamics constraints to produce executable commands. This discontinuity in dynamics often leads to trajectory tracking failure in highly constrained environments. Recent approaches integrate dynamics within the entire planning process by gradually decreasing its fidelity, e.g., increasing integration steps and reducing collision checking resolution, for real-time planning efficiency. However, they assume that the fidelity of the dynamics should decrease according to a manually designed scheme. Such static settings fail to adapt to environmental complexity variations, resulting in computational overhead in simple environments or insufficient dynamics consideration in obstacle-rich scenarios. To overcome this limitation, we propose Adaptive Dynamics Planning (ADP), a learning-augmented paradigm that uses reinforcement learning to dynamically adjust robot dynamics properties, enabling planners to adapt across diverse environments. We integrate ADP into three different planners and further design a standalone ADP-based navigation system, benchmarking them against other baselines. Experiments in both simulation and real-world tests show that ADP consistently improves navigation success, safety, and efficiency.

Attention-Enhanced Prototypical Learning for Few-Shot Infrastructure Defect Segmentation

Authors: Christina Thrainer, Md Meftahul Ferdaus, Mahdi Abdelguerfi, Christian Guetl, Steven Sloan, Kendall N. Niles, Ken Pathak
Date: 2025-10-06 18:33:31

Few-shot semantic segmentation is vital for deep learning-based infrastructure inspection applications, where labeled training examples are scarce and expensive. Although existing deep learning frameworks perform well, the need for extensive labeled datasets and the inability to learn new defect categories from little data are problematic. We present our Enhanced Feature Pyramid Network (E-FPN) framework for few-shot semantic segmentation of culvert and sewer defect categories using a prototypical learning framework. Our approach has three main contributions: (1) an adaptive E-FPN encoder using InceptionSepConv blocks and depth-wise separable convolutions for efficient multi-scale feature extraction; (2) prototypical learning with masked average pooling for powerful prototype generation from small support examples; and (3) attention-based feature representation through global self-attention, local self-attention, and cross-attention. Comprehensive experimentation on challenging infrastructure inspection datasets shows that the method achieves excellent few-shot performance, with the best configuration, 8-way 5-shot training, reaching an 82.55% F1-score and 72.26% mIoU in 2-way classification testing. The self-attention method yielded the most significant improvements, providing a 2.57% F1-score and 2.9% mIoU gain over baselines. Our framework addresses the critical need to respond rapidly to new defect types in infrastructure inspection systems with limited new training data, leading to more efficient and economical maintenance plans for critical infrastructure systems.
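Masked average pooling, the prototype-generation step named in contribution (2), is compact enough to show directly. The feature-map shapes and random values below are illustrative, not the E-FPN's actual features:

```python
import numpy as np

# Masked average pooling: average a support image's feature vectors over
# only the pixels belonging to the target class, yielding one prototype.

def masked_average_pool(features, mask):
    """features: (H, W, C); mask: (H, W) binary -> class prototype (C,)."""
    m = mask.astype(float)
    return (features * m[..., None]).sum(axis=(0, 1)) / (m.sum() + 1e-8)

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 4, 8))           # toy support feature map
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1                           # toy defect region
proto = masked_average_pool(feats, mask)

# Query pixels are then labeled by similarity to each class prototype,
# e.g. cosine similarity for one query feature vector:
q = feats[2, 2]
cos = q @ proto / (np.linalg.norm(q) * np.linalg.norm(proto))
print(proto.shape, round(float(cos), 3))
```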

CEPC Technical Design Report -- Reference Detector

Authors: The CEPC Study Group
Date: 2025-10-06 18:22:35

The Circular Electron Positron Collider (CEPC) is a large international scientific project initiated by China's particle physicists to study the Higgs boson and perform critical tests of the Standard Model. Housed in a 100-km circumference tunnel in China, the CEPC will primarily operate as a Higgs factory, producing electron-positron collisions at a center-of-mass energy of 240 GeV. It will also function as a Z factory at 91.2 GeV and operate at the WW production threshold (around 160 GeV). In its baseline configuration, the CEPC produces two million Higgs bosons for one experiment. An upgraded scenario would enable higher luminosities, delivering 4.3 million Higgs events across two experiments. The CEPC will also generate trillions of Z bosons, whose subsequent decays will produce vast quantities of bottom quarks, charm quarks, and tau-leptons, establishing it as a high-precision B-factory and tau-charm factory. This document constitutes the second volume of the CEPC Technical Design Report (TDR). It provides a comprehensive description of the CEPC Reference Detector technical design. It summarizes the physics case for the CEPC, details the Reference Detector's technical design and technological options, and highlights its expected performance, demonstrating that the detector meets its design goals. A cost estimate for the Reference Detector and future development plans are also presented. Also included are two additional detector concepts, ILD and IDEA, developed by the international community for future electron-positron colliders and candidates to equip the CEPC's second interaction point. The first TDR volume, published in 2023, details the design of the CEPC accelerator complex. Pending government approval, construction is anticipated to begin around 2027-2028, with an estimated duration of eight years. Physics data-taking is projected to commence in the 2030s.

Factuality Matters: When Image Generation and Editing Meet Structured Visuals

Authors:Le Zhuo, Songhao Han, Yuandong Pu, Boxiang Qiu, Sayak Paul, Yue Liao, Yihao Liu, Jie Shao, Xi Chen, Si Liu, Hongsheng Li
Date:2025-10-06 17:56:55

While modern visual generation models excel at creating aesthetically pleasing natural images, they struggle with producing or editing structured visuals like charts, diagrams, and mathematical figures, which demand composition planning, text rendering, and multimodal reasoning for factual fidelity. To address this, we present the first comprehensive, systematic investigation of this domain, encompassing data construction, model training, and an evaluation benchmark. First, we construct a large-scale dataset of 1.3 million high-quality structured image pairs derived from executable drawing programs and augmented with chain-of-thought reasoning annotations. Building on it, we train a unified model that integrates a VLM with FLUX.1 Kontext via a lightweight connector for enhanced multimodal understanding. A three-stage training curriculum enables progressive feature alignment, knowledge infusion, and reasoning-augmented generation, further boosted by an external reasoner at inference time. Finally, we introduce StructBench, a novel benchmark for generation and editing with over 1,700 challenging instances, and an accompanying evaluation metric, StructScore, which employs a multi-round Q&A protocol to assess fine-grained factual accuracy. Evaluations of 15 models reveal that even leading closed-source systems remain far from satisfactory. Our model attains strong editing performance, and inference-time reasoning yields consistent gains across diverse architectures. By releasing the dataset, model, and benchmark, we aim to advance unified multimodal foundations for structured visuals.
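
The multi-round Q&A idea behind StructScore can be sketched in a few lines; the names below (`struct_score`, the toy chart spec, the `judge`) are illustrative assumptions, not the paper's actual protocol:

```python
# Hypothetical sketch of a StructScore-style metric: factual questions
# about a generated figure are posed over multiple rounds, and accuracy
# yields a fine-grained score. The judge and chart spec below are
# illustrative stand-ins, not the paper's actual pipeline.

def struct_score(questions, ask_judge):
    """Fraction of factual Q&A checks the generated image passes."""
    correct = sum(ask_judge(q) == expected for q, expected in questions)
    return correct / len(questions)

# Toy "judge" that reads attributes off a ground-truth chart spec;
# in the paper, a model answers such questions from the image itself.
chart = {"bars": 4, "title": "Revenue", "max_label": "Q3"}
judge = chart.get

qa = [("bars", 4), ("title", "Revenue"), ("max_label", "Q2")]
print(struct_score(qa, judge))  # 2 of 3 checks pass
```

A real evaluator would pose the questions to a VLM over several rounds; the averaging over fine-grained checks is what distinguishes this from a single pass/fail judgment.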

MICROTRIPS: MICRO-geography TRavel Intelligence and Pattern Synthesis

Authors:Yangyang Wang, Tayo Fabusuyi
Date:2025-10-06 17:50:56

This study presents a novel small-area estimation framework to enhance urban transportation planning through detailed characterization of travel behavior. Our approach improves on the four-step travel model by employing publicly available microdata files and machine learning methods to predict travel behavior for a representative, synthetic population at small geographic areas. This approach enables high-resolution estimation of trip generation, trip distribution, mode choice, and route assignment. Validation using ACS/PUMS work-commute datasets demonstrates that our framework achieves higher accuracy compared to conventional approaches. The resulting granular insights enable the tailoring of interventions to address localized situations and support a range of policy applications and targeted interventions, including the optimal placement of micro-fulfillment centers, effective curb-space management, and the design of more inclusive transportation solutions particularly for vulnerable communities.

Efficient Navigation in Unknown Indoor Environments with Vision-Language Models

Authors:D. Schwartz, K. Kondo, J. P. How
Date:2025-10-06 16:26:16

We present a novel high-level planning framework that leverages vision-language models (VLMs) to improve autonomous navigation in unknown indoor environments with many dead ends. Traditional exploration methods often take inefficient routes due to limited global reasoning and reliance on local heuristics. In contrast, our approach enables a VLM to reason directly about an occupancy map in a zero-shot manner, selecting subgoals that are likely to lead to more efficient paths. At each planning step, we convert a 3D occupancy grid into a partial 2D map of the environment, and generate candidate subgoals. Each subgoal is then evaluated and ranked against other candidates by the model. We integrate this planning scheme into DYNUS [Kondo et al., 2025], a state-of-the-art trajectory planner, and demonstrate improved navigation efficiency in simulation. The VLM infers structural patterns (e.g., rooms, corridors) from incomplete maps and balances the need to make progress toward a goal against the risk of entering unknown space. This reduces common greedy failures (e.g., detouring into small rooms) and achieves about 10% shorter paths on average.
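
The subgoal-ranking step can be illustrated with a toy scorer; here a plain progress-to-goal heuristic plus a pluggable bonus stands in for the VLM's zero-shot ranking, and all names are illustrative:

```python
# Toy version of the subgoal-ranking step: candidate frontier subgoals
# on the partial 2D map are ordered by progress toward the goal plus a
# pluggable score. The plain heuristic below stands in for the VLM's
# zero-shot ranking described in the abstract.

import math

def rank_subgoals(subgoals, robot, goal, bonus=lambda sg: 0.0):
    def score(sg):
        # How much closer to the goal this subgoal would bring us.
        progress = math.dist(robot, goal) - math.dist(sg, goal)
        return progress + bonus(sg)  # VLM bonus, e.g. "looks like a corridor"
    return sorted(subgoals, key=score, reverse=True)

robot, goal = (0, 0), (10, 0)
candidates = [(3, 0), (0, 3), (-2, 0)]
print(rank_subgoals(candidates, robot, goal)[0])  # (3, 0) makes most progress
```

The paper's contribution is precisely that the model's structural reasoning can override this kind of greedy progress term, e.g. penalizing a closer subgoal that leads into a dead-end room.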

Selection Bias in Hybrid Randomized Controlled Trials using External Controls: A Simulation Study

Authors:Han Chang Chiam, Franz König, Martin Posch
Date:2025-10-06 14:13:30

Hybrid randomized controlled trials (hybrid RCTs) integrate external control data, such as historical or concurrent data, with data from randomized trials. Numerous frequentist and Bayesian methods, such as test-then-pool and the Meta-Analytic-Predictive prior, have been developed to account for potential disagreement between external control and randomized data; although they cannot ensure strict type I error rate control, they can reduce biases stemming from systematic differences between external controls and trial data. A critical yet underexplored issue in hybrid RCTs is the prespecification of external data to be used in analysis. The validity of statistical conclusions in hybrid RCTs depends on the assumption that external control selection is independent of historical trial outcomes. In practice, historical data may be accessible during the planning stage, potentially influencing important decisions, such as which historical datasets to include or the sample size of the prospective part of the hybrid trial. Such data-driven design choices can bias results even when historical and prospective controls are exchangeable. Through a simulation study, we quantify the biases introduced by outcome-dependent selection of historical controls in hybrid RCTs using both Bayesian and frequentist approaches, and discuss potential strategies to mitigate this bias. Our scenarios consider variability and time trends in the historical studies, distributional shifts between historical and prospective control groups, sample sizes and allocation ratios, as well as the number of studies included. The impact of different rules for selecting external controls is demonstrated using a clinical trial example.

Efficient Probabilistic Planning with Maximum-Coverage Distributionally Robust Backward Reachable Trees

Authors:Alex Rose, Naman Aggarwal, Christopher Jewison, Jonathan P. How
Date:2025-10-06 13:46:55

This paper presents a new multi-query motion planning algorithm for linear Gaussian systems with the goal of reaching a Euclidean ball with high probability. We develop a new formulation for ball-shaped ambiguity sets of Gaussian distributions and leverage it to develop a distributionally robust belief roadmap construction algorithm. This algorithm synthesizes robust controllers which are certified to be safe for maximal size ball-shaped ambiguity sets of Gaussian distributions. Our algorithm achieves better coverage than the maximal coverage algorithm for planning over Gaussian distributions [1], and we identify mild conditions under which our algorithm achieves strictly better coverage. For the special case of no process noise or state constraints, we formally prove that our algorithm achieves maximal coverage. In addition, we present a second multi-query motion planning algorithm for linear Gaussian systems with the goal of reaching a region parameterized by the Minkowski sum of an ellipsoid and a Euclidean ball with high probability. This algorithm plans over ellipsoidal sets of maximal size ball-shaped ambiguity sets of Gaussian distributions, and provably achieves equal or better coverage than the best-known algorithm for planning over ellipsoidal ambiguity sets of Gaussian distributions [2]. We demonstrate the efficacy of both methods in a wide range of conditions via extensive simulation experiments.
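
For a single known Gaussian, the "reach a Euclidean ball with high probability" objective can be checked by Monte Carlo; this is only an illustrative estimate, whereas the paper certifies the property robustly over whole ball-shaped ambiguity sets of distributions:

```python
# Monte Carlo sketch of the reachability goal: estimate the probability
# that a Gaussian-distributed terminal state lands inside a Euclidean
# ball. Purely illustrative; the paper instead certifies safety for
# maximal-size ball-shaped ambiguity sets of Gaussian distributions.

import math
import random

def prob_in_ball(mean, std, center, radius, n=100_000, seed=0):
    """Estimate P(||x - center|| <= radius) for x ~ N(mean, std^2 I)."""
    rng = random.Random(seed)
    hits = sum(
        math.dist([rng.gauss(m, std) for m in mean], center) <= radius
        for _ in range(n)
    )
    return hits / n

# Terminal state ~ N([0, 0], 0.3^2 I); goal ball of radius 1 at origin.
p = prob_in_ball([0.0, 0.0], 0.3, [0.0, 0.0], 1.0)
print(p)  # high: the ball boundary sits at roughly 3.3 sigma
```

The distributionally robust version asks this question for every Gaussian within a ball (in distribution space) around the nominal one, which is what makes the certificates reusable across queries.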

Hands-Free Heritage: Automated 3D Scanning for Cultural Heritage Digitization

Authors:Javed Ahmad, Federico Dassiè, Selene Frascella, Gabriele Marchello, Ferdinando Cannella, Arianna Traviglia
Date:2025-10-06 12:58:41

High-fidelity 3D scanning is essential for preserving cultural heritage artefacts, supporting documentation, analysis, and long-term conservation. However, conventional methods typically require specialized expertise and manual intervention to maintain optimal scanning conditions and coverage. We present an automated two-robot scanning system that eliminates the need for handheld or semi-automatic workflows by combining coordinated robotic manipulation with high-resolution 3D scanning. Our system parameterizes the scanning space into distinct regions, enabling coordinated motion planning between a scanner-equipped robot and a tray-handling robot. Optimized trajectory planning and waypoint distribution ensure comprehensive surface coverage, minimize occlusions, and balance reconstruction accuracy with system efficiency. Experimental results show that our approach achieves significantly lower Chamfer Distance and higher F-score compared to baseline methods, offering superior geometric accuracy, improved digitization efficiency, and reduced reliance on expert operators.
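
The Chamfer Distance used above to evaluate reconstruction accuracy is the symmetric mean nearest-neighbour distance between point clouds; a brute-force sketch (real evaluation pipelines would use k-d trees or GPU batching):

```python
# Chamfer Distance between two point clouds: for each point, find the
# nearest point in the other cloud, average those distances, and sum
# the two directions. Brute-force O(n*m) for clarity.

import math

def chamfer_distance(a, b):
    def one_way(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return one_way(a, b) + one_way(b, a)

scan  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
truth = [(0.0, 0.0, 0.0), (1.0, 0.1, 0.0)]
print(chamfer_distance(scan, truth))  # ~0.1: one point is 0.1 off each way
```

Lower is better; the F-score mentioned alongside it instead thresholds these nearest-neighbour distances and reports precision/recall, making it less sensitive to outlier points.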

Building Gradient by Gradient: Decentralised Energy Functions for Bimanual Robot Assembly

Authors:Alexander L. Mitchell, Joe Watson, Ingmar Posner
Date:2025-10-06 11:10:11

There are many challenges in bimanual assembly, including high-level sequencing, multi-robot coordination, and low-level, contact-rich operations such as component mating. Task and motion planning (TAMP) methods, while effective in this domain, may be prohibitively slow to converge when adapting to disturbances that require new task sequencing and optimisation. These events are common during tight-tolerance assembly, where difficult-to-model dynamics such as friction or deformation require rapid replanning and reattempts. Moreover, defining explicit task sequences for assembly can be cumbersome, limiting flexibility when task replanning is required. To simplify this planning, we introduce a decentralised gradient-based framework that uses a piecewise continuous energy function through the automatic composition of adaptive potential functions. This approach generates sub-goals using only myopic optimisation, rather than long-horizon planning. It demonstrates effectiveness at solving long-horizon tasks due to the structure and adaptivity of the energy function. We show that our approach scales to physical bimanual assembly tasks for constructing tight-tolerance assemblies. In these experiments, we discover that our gradient-based rapid replanning framework generates automatic retries, coordinated motions and autonomous handovers in an emergent fashion.
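
A one-dimensional toy of myopic gradient descent on a composed energy function (attractive goal term plus repulsive obstacle term) illustrates the sub-goal generation idea; the paper's adaptive, piecewise composition is far richer, and all names and gains here are illustrative:

```python
# 1-D toy of myopic gradient descent on a composed energy function:
# an attractive quadratic toward the current sub-goal plus a repulsive
# term near an obstacle. No long-horizon planning, only local steps.

def energy_grad(x, goal, obstacle, k_rep=0.5):
    # d/dx of 0.5*(x - goal)**2 + k_rep / (x - obstacle)**2
    return (x - goal) - 2 * k_rep / (x - obstacle) ** 3

def descend(x, goal, obstacle, lr=0.05, steps=500):
    for _ in range(steps):
        x -= lr * energy_grad(x, goal, obstacle)  # myopic step, no lookahead
    return x

# Agent at x=0 approaches goal x=2; an obstacle behind it at x=-1
# nudges the equilibrium slightly past the goal.
print(round(descend(0.0, 2.0, -1.0), 2))
```

The emergent retries and handovers the authors report come from the same mechanism at scale: when a disturbance reshapes the energy landscape, the gradient simply points somewhere new, with no explicit replanning step.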

Watch and Learn: Learning to Use Computers from Online Videos

Authors:Chan Hee Song, Yiwen Song, Palash Goyal, Yu Su, Oriana Riva, Hamid Palangi, Tomas Pfister
Date:2025-10-06 10:29:00

Computer use agents (CUAs) need to plan task workflows grounded in diverse, ever-changing applications and environments, but learning is hindered by the scarcity of large-scale, high-quality training data in the target application. Existing datasets are domain-specific, static, and costly to annotate, while current synthetic data generation methods often yield simplistic or misaligned task demonstrations. To address these limitations, we introduce Watch & Learn (W&L), a framework that converts human demonstration videos readily available on the Internet into executable UI trajectories at scale. Instead of directly generating trajectories or relying on ad hoc reasoning heuristics, we cast the problem as an inverse dynamics objective: predicting the user's action from consecutive screen states. This formulation reduces manual engineering, is easier to learn, and generalizes more robustly across applications. Concretely, we develop an inverse dynamics labeling pipeline with task-aware video retrieval, generate over 53k high-quality trajectories from raw web videos, and demonstrate that these trajectories improve CUAs both as in-context demonstrations and as supervised training data. On the challenging OSWorld benchmark, UI trajectories extracted with W&L consistently enhance both general-purpose and state-of-the-art frameworks in-context, and deliver stronger gains for open-source models under supervised training. These results highlight web-scale human demonstration videos as a practical and scalable foundation for advancing CUAs towards real-world deployment.
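
The inverse-dynamics objective (predict the user's action from consecutive screen states) can be mocked with dict-valued screens; this hand-written diff is a stand-in for the learned labeler in the paper:

```python
# Toy of the inverse-dynamics framing: infer the user action from two
# consecutive screen states. "Screens" are dicts of UI element states,
# and the inferred action is a click on whichever element changed;
# the paper trains a model to produce such labels from raw video frames.

def infer_action(screen_t, screen_t1):
    for element, state in screen_t.items():
        if screen_t1.get(element) != state:
            return ("click", element)
    return ("noop", None)

before = {"File_menu": "closed", "Save_btn": "enabled"}
after  = {"File_menu": "open",   "Save_btn": "enabled"}
print(infer_action(before, after))  # ('click', 'File_menu')
```

The appeal of this framing is that both states are observed in the video, so the action label is tightly constrained, which is why it reduces manual engineering compared with generating trajectories outright.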

Design Process of a Self Adaptive Smart Serious Games Ecosystem

Authors:X. Tao, P. Chen, M. Tsami, F. Khayati, M. Eckert
Date:2025-10-06 09:28:31

This paper outlines the design vision and planned evolution of Blexer v3, a modular and AI-driven rehabilitation ecosystem based on serious games. Building on insights from previous versions of the system, we propose a new architecture that aims to integrate multimodal sensing, real-time reasoning, and intelligent control. The envisioned system will include distinct modules for data collection, user state inference, and gameplay adaptation. Key features such as dynamic difficulty adjustment (DDA) and procedural content generation (PCG) are also considered to support personalized interventions. We present the complete conceptual framework of Blexer v3, which defines the modular structure and data flow of the system. This serves as the foundation for the next phase: the development of a functional prototype and its integration into clinical rehabilitation scenarios.