This paper presents data-driven approaches for integrated assortment planning and inventory allocation that significantly improve fulfillment efficiency at JD.com, a leading e-commerce company. JD.com uses a two-level distribution network that includes regional distribution centers (RDCs) and front distribution centers (FDCs). Selecting products to stock at FDCs and then optimizing daily inventory allocation from RDCs to FDCs is critical to improving fulfillment efficiency, which in turn is crucial for enhancing customer experience. For assortment planning, we propose efficient algorithms to maximize the number of orders that can be fulfilled by FDCs (local fulfillment). For inventory allocation, we develop a novel end-to-end algorithm that integrates forecasting, optimization, and simulation to minimize lost sales and inventory transfer costs. Numerical experiments demonstrate that our methods outperform existing approaches: the assortment planning algorithms increase local order fulfillment rates by 0.54%, and the inventory allocation algorithm increases FDC demand satisfaction rates by 1.05%. Given the high-volume operations of JD.com, with millions of weekly orders per region, these improvements yield substantial benefits beyond the company's established supply chain system. Implementation across JD.com's network has reduced costs, improved stock availability, and increased local order fulfillment rates for millions of orders annually.
Engineering models created in Model-Based Systems Engineering (MBSE) environments contain detailed information about system structure and behavior. However, they typically lack symbolic planning semantics such as preconditions, effects, and constraints related to resource availability and timing. This limits their ability to evaluate whether a given system variant can fulfill specific tasks and how efficiently it performs compared to alternatives. To address this gap, this paper presents a model-driven method that enables the specification and automated generation of symbolic planning artifacts within SysML-based engineering models. A dedicated SysML profile introduces reusable stereotypes for core planning constructs. These are integrated into existing model structures and processed by an algorithm that generates a valid domain file and a corresponding problem file in Planning Domain Definition Language (PDDL). In contrast to previous approaches that rely on manual transformations or external capability models, the method supports native integration and maintains consistency between engineering and planning artifacts. The applicability of the method is demonstrated through a case study from aircraft assembly. The example illustrates how existing engineering models are enriched with planning semantics and how the proposed workflow is applied to generate consistent planning artifacts from these models. The generated planning artifacts enable the validation of system variants through AI planning.
Cone-Beam Computed Tomography (CBCT) is a widely used 3D imaging technique in dentistry, providing volumetric information about the anatomical structures of jaws and teeth. Accurate segmentation of these anatomies is critical for clinical applications such as diagnosis and surgical planning, but remains time-consuming and challenging. In this paper, we present U-Mamba2, a new neural network architecture designed for multi-anatomy CBCT segmentation in the context of the ToothFairy3 challenge. U-Mamba2 integrates the Mamba2 state space models into the U-Net architecture, enforcing stronger structural constraints for higher efficiency without compromising performance. In addition, we integrate interactive click prompts with cross-attention blocks, pre-train U-Mamba2 using self-supervised learning, and incorporate dental domain knowledge into the model design to address key challenges of dental anatomy segmentation in CBCT. Extensive experiments, including independent tests, demonstrate that U-Mamba2 is both effective and efficient, securing a top-3 place in both tasks of the ToothFairy3 challenge. In Task 1, U-Mamba2 achieved a mean Dice of 0.792 and an HD95 of 93.19 on the held-out test data, with an average inference time of XX (TBC during the ODIN workshop). In Task 2, U-Mamba2 achieved a mean Dice of 0.852 and an HD95 of 7.39 on the held-out test data. The code is publicly available at https://github.com/zhiqin1998/UMamba2.
The fine-grained surface reconstruction of different organs from 3D medical imaging can provide advanced diagnostic support and improved surgical planning. However, the representation of the organs is often limited by the image resolution, with higher-resolution detail requiring a larger memory and compute footprint. Implicit representations of objects have been proposed in general computer vision to alleviate this problem, providing compact and differentiable functions to represent 3D object shapes. However, architectural and data-related differences prevent the direct application of these methods to medical images. This work introduces ImplMORe, an end-to-end deep learning method using implicit surface representations for multi-organ reconstruction from 3D medical images. ImplMORe incorporates local features using a 3D CNN encoder and performs multi-scale interpolation to learn the features in the continuous domain using occupancy functions. We apply our method to single and multiple organ reconstructions using the TotalSegmentator dataset. By leveraging the continuous nature of occupancy functions, our approach outperforms discrete explicit representation based surface reconstruction approaches, providing fine-grained surface details of the organ at a resolution higher than that of the given input image. The source code will be made publicly available at: https://github.com/CAMMA-public/ImplMORe
We study the problem of generalizing an expert agent's behavior, provided through demonstrations, to new environments and/or additional constraints. Inverse Reinforcement Learning (IRL) offers a promising solution by seeking to recover the expert's underlying reward function, which, if used for planning in the new settings, would reproduce the desired behavior. However, IRL is inherently ill-posed: multiple reward functions, forming the so-called feasible set, can explain the same observed behavior. Since these rewards may induce different policies in the new setting, in the absence of additional information, a decision criterion is needed to select which policy to deploy. In this paper, we propose a novel, principled criterion that selects the "average" policy among those induced by the rewards in a certain bounded subset of the feasible set. Remarkably, we show that this policy can be obtained by planning with the reward centroid of that subset, for which we derive a closed-form expression. We then present a provably efficient algorithm for estimating this centroid using an offline dataset of expert demonstrations only. Finally, we conduct numerical simulations that illustrate the relationship between the expert's behavior and the behavior produced by our method.
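The abstract does not reproduce the closed-form expression for the reward centroid, but the idea of planning with it can be illustrated in a small tabular setting. The sketch below approximates the centroid by the empirical mean of rewards sampled from a hypothetical bounded subset of the feasible set and then plans with standard value iteration; the problem sizes, transition kernel, and sampling scheme are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def value_iteration(P, r, gamma=0.95, tol=1e-8):
    """Plan greedily with a tabular reward r (S x A) and transitions P (S x A x S)."""
    S, A = r.shape
    V = np.zeros(S)
    while True:
        Q = r + gamma * P @ V              # Bellman backup, shape (S, A)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return Q.argmax(axis=1)                # greedy policy

# Hypothetical samples from a bounded subset of the feasible reward set
# (the paper derives a closed form for the centroid; here we simply average samples).
rng = np.random.default_rng(0)
S, A = 5, 3
P = rng.dirichlet(np.ones(S), size=(S, A))          # random transition kernel
feasible_samples = [rng.uniform(-1, 1, size=(S, A)) for _ in range(100)]
r_centroid = np.mean(feasible_samples, axis=0)       # empirical centroid
policy = value_iteration(P, r_centroid)
print("policy induced by the reward centroid:", policy)
```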
Diffusion-based planners have gained significant recent attention for their robustness and performance in long-horizon tasks. However, most existing planners rely on a fixed, pre-specified horizon during both training and inference. This rigidity often produces length mismatches (trajectories that are too short or too long) and brittle performance across instances with varying geometric or dynamical difficulty. In this paper, we introduce the Variable Horizon Diffuser (VHD) framework, which treats the horizon as a learned variable rather than a fixed hyperparameter. Given a start-goal pair, we first predict an instance-specific horizon using a learned Length Predictor model, which guides a Diffusion Planner to generate a trajectory of the desired length. Our design maintains compatibility with existing diffusion planners by controlling trajectory length through initial noise shaping and training on randomly cropped sub-trajectories, without requiring architectural changes. Empirically, VHD improves success rates and path efficiency in maze-navigation and robot-arm control benchmarks, showing greater robustness to horizon mismatch and unseen lengths, while keeping training simple and offline-only.
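A minimal sketch of the two ingredients described in the abstract, under assumed interfaces: a small MLP length predictor maps a start-goal pair to an integer horizon, and random cropping of training trajectories exposes the unmodified diffusion planner to many lengths. The class and function names, network sizes, and the maximum horizon are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn

class LengthPredictor(nn.Module):
    """Predicts an instance-specific horizon from a start-goal pair (hypothetical sizes)."""
    def __init__(self, state_dim, hidden=128, max_horizon=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        self.max_horizon = max_horizon

    def forward(self, start, goal):
        h = self.net(torch.cat([start, goal], dim=-1))
        # Squash to a valid horizon in [1, max_horizon]; the planner's initial
        # noise is then shaped to this length.
        return (torch.sigmoid(h) * self.max_horizon).round().clamp(min=1)

def random_crop(trajectory, min_len=16):
    """Training-time augmentation: crop a random sub-trajectory so the planner
    sees many lengths without architectural changes."""
    T = trajectory.shape[0]
    length = torch.randint(min_len, T + 1, ()).item()
    start = torch.randint(0, T - length + 1, ()).item()
    return trajectory[start:start + length]
```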
This study develops and evaluates a novel hybrid Wavelet-SARIMA-Transformer (WST) framework to forecast monthly rainfall across five meteorological subdivisions of Northeast India over the 1971-2023 period. The approach employs the Maximal Overlap Discrete Wavelet Transform (MODWT) with four wavelet families (Haar, Daubechies, Symlet, and Coiflet) to achieve a shift-invariant, multiresolution decomposition of the rainfall series. Linear and seasonal components are modeled using Seasonal ARIMA (SARIMA), nonlinear components are modeled by a Transformer network, and forecasts are reconstructed via the inverse MODWT. Comprehensive validation using an 80:20 train-test split and multiple performance indices (RMSE, MAE, SMAPE, Willmott's d, Skill Score, Percent Bias, Explained Variance, and Legates-McCabe's E1) demonstrates the superiority of the Haar-based hybrid model (WHST). Across all subdivisions, WHST consistently achieved lower forecast errors, stronger agreement with observed rainfall, and unbiased predictions compared with stand-alone SARIMA, stand-alone Transformer, and two-stage wavelet hybrids. Residual adequacy was confirmed through the Ljung-Box test, while Taylor diagrams provided an integrated assessment of correlation, variance fidelity, and RMSE, further reinforcing the robustness of the proposed approach. The results highlight the effectiveness of integrating multiresolution signal decomposition with complementary linear and deep learning models for hydroclimatic forecasting. Beyond rainfall, the proposed WST framework offers a scalable methodology for forecasting complex environmental time series, with direct implications for flood risk management, water resources planning, and climate adaptation strategies in data-sparse and climate-sensitive regions.
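The decompose-model-recompose pipeline can be sketched as follows. Because an off-the-shelf MODWT routine is less common, PyWavelets' stationary wavelet transform (SWT), a closely related shift-invariant decomposition, stands in for MODWT, and a persistence forecast stands in for the Transformer on the detail components; the SARIMA orders, series length, and forecast horizon are illustrative assumptions.

```python
import numpy as np
import pywt
from statsmodels.tsa.statespace.sarimax import SARIMAX

def hybrid_forecast(y, wavelet="haar", level=2, steps=12):
    """Decompose -> model components -> recompose (simplified sketch)."""
    # Shift-invariant decomposition; pywt's SWT is used here as a stand-in for MODWT.
    coeffs = pywt.swt(y, wavelet, level=level)            # [(cA_i, cD_i), ...]
    forecasts = []
    for cA, cD in coeffs:
        # Linear/seasonal part of each sub-series with SARIMA
        fit_a = SARIMAX(cA, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
        fc_a = fit_a.forecast(steps)
        # The paper uses a Transformer for the nonlinear detail part;
        # a persistence forecast stands in for it here.
        fc_d = np.repeat(cD[-1], steps)
        forecasts.append((fc_a, fc_d))
    # Recompose the forecast horizon via the inverse transform
    return pywt.iswt(forecasts, wavelet)

rain = np.random.default_rng(1).gamma(2.0, 50.0, size=636)  # dummy monthly series
print(hybrid_forecast(rain)[:5])
```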
We propose an O(100)m Atom Interferometer (AI) experiment to be installed against a wall of the PX46 access shaft to the LHC. This experiment would probe unexplored ranges of the possible couplings of bosonic ultralight dark matter (ULDM) to atomic constituents and undertake a pioneering search for gravitational waves (GWs) at frequencies intermediate between those to which existing and planned experiments are sensitive, among other fundamental physics studies. A conceptual feasibility study showed that this AI experiment could be isolated from the LHC by installing a shielding wall in the TX46 gallery, and surveyed issues related to the proximity of the LHC machine, finding no technical obstacles. A detailed technical implementation study has shown that the preparatory civil-engineering work, installation of bespoke radiation shielding, deployment of access-control systems and safety alarms, and installation of an elevator platform could be carried out during LS3, allowing installation and operation of the detector to proceed during Run 4 without impacting HL-LHC operation. These studies have established that PX46 is a uniquely promising location for an AI experiment. We foresee that, if the CERN management encourages this Letter of Intent, a significant fraction of the Terrestrial Very Long Baseline Atom Interferometer (TVLBAI) Proto-Collaboration may wish to contribute to such an AI experiment.
This paper presents UniPilot, a compact hardware-software autonomy payload that can be integrated across diverse robot embodiments to enable autonomous operation in GPS-denied environments. The system integrates a multi-modal sensing suite including LiDAR, radar, vision, and inertial sensing for robust operation in conditions where uni-modal approaches may fail. UniPilot runs a complete autonomy software stack comprising multi-modal perception, exploration and inspection path planning, and learning-based navigation policies. The payload provides robust localization, mapping, planning, and safety and control capabilities in a single unit that can be deployed across a wide range of platforms. A large number of experiments are conducted across diverse environments and on a variety of robot platforms to validate the mapping, planning, and safe navigation capabilities enabled by the payload.
Autonomous stocking in retail environments, particularly supermarkets, presents challenges due to dynamic human interactions, constrained spaces, and diverse product geometries. This paper introduces an efficient end-to-end robotic system for autonomous shelf stocking and fronting, integrating commercially available hardware with a scalable algorithmic architecture. A major contribution of this work is the system integration of off-the-shelf hardware and ROS2-based perception, planning, and control into a single deployable platform for retail environments. Our solution leverages Behavior Trees (BTs) for task planning, fine-tuned vision models for object detection, and a two-step Model Predictive Control (MPC) framework for precise shelf navigation using ArUco markers. Laboratory experiments replicating realistic supermarket conditions demonstrate reliable performance, achieving over 98% success in pick-and-place operations across a total of more than 700 stocking events. However, our comparative benchmarks indicate that the performance and cost-effectiveness of current autonomous systems remain inferior to that of human workers, which we use to highlight key improvement areas and quantify the progress still required before widespread commercial deployment can realistically be achieved.
The emergence of Braess' paradox in road traffic systems demonstrates the importance of careful transportation planning for improving efficiency. By contrast, the phenomenon has rarely been examined in pedestrian evacuation traffic, and the possibility that Braess' paradox could lengthen evacuation times under hazardous conditions has received little systematic attention. In this paper, we investigate Braess' paradox in pedestrian evacuation traffic through a series of supervised experiments and corresponding traffic assignment models to examine its potential occurrence. Our empirical and modeling results indicate that Braess' paradox is unlikely to be a prevalent phenomenon in pedestrian traffic systems. Specifically, under autonomous evacuation and the assumption of complete network knowledge, the paradox does not arise in the high-demand evacuation contexts of our case studies. Under a more realistic assumption of limited network knowledge, however, the paradox can occur. These findings highlight the importance of information conditions for evacuation performance and provide guidance for the design and management of large public venues.
Traditional disaster analysis and modelling tools for assessing the severity of a disaster are predictive in nature. Based on past observational data, these tools predict how the current input state (e.g., environmental conditions, situation reports) translates into a severity assessment. However, these systems are not meant to be interventional in the causal sense, where the user can modify the current input state to simulate counterfactual "what-if" scenarios. In this work, we provide an alternative interventional tool that complements traditional disaster modelling tools by leveraging real-time data sources like satellite imagery, news, and social media. Our tool also helps understand the causal attribution of different factors to the estimated severity over any given region of interest. In addition, we provide actionable recourses that would enable easier mitigation planning. Our source code is publicly available.
This paper formulates the Embodied Questions Answering (EQsA) problem, introduces a corresponding benchmark, and proposes a system to tackle the problem. Classical Embodied Question Answering (EQA) is typically formulated as answering a single question by actively exploring a 3D environment. Real deployments, however, often demand handling multiple questions that may arrive asynchronously and carry different urgencies. We formalize this setting as Embodied Questions Answering (EQsA) and present ParaEQsA, a framework for parallel, urgency-aware scheduling and answering. ParaEQsA leverages a group memory module shared among questions to reduce redundant exploration, and a priority-planning module to dynamically schedule questions. To evaluate this setting, we contribute the Parallel Asynchronous Embodied Questions (PAEQs) benchmark containing 40 indoor scenes and five questions per scene (200 in total), featuring asynchronous follow-up questions and urgency labels. We further propose metrics for EQsA performance: Direct Answer Rate (DAR) and Normalized Urgency-Weighted Latency (NUWL), which jointly measure the efficiency and responsiveness of the system. ParaEQsA consistently outperforms strong sequential baselines adapted from recent EQA systems, while reducing exploration and delay. Empirical evaluations investigate the relative contributions of priority, urgency modeling, spatial scope, reward estimation, and dependency reasoning within our framework. Together, these results demonstrate that urgency-aware, parallel scheduling is key to making embodied agents responsive and efficient under realistic, multi-question workloads.
Assessing the impact of a disaster in terms of asset losses and human casualties is essential for preparing effective response plans. Traditional methods include offline assessments conducted on the ground, where volunteers and first responders work together to collect estimates of losses through windshield surveys or on-ground inspection. However, these methods involve a time delay and are prone to various biases. Recently, various online data sources, including social media, news reports, aerial imagery, and satellite data, have been utilized to evaluate the impact of disasters. Online data sources provide real-time data streams for estimating the offline impact. However, limited research exists on how different online sources help estimate disaster impact for a given administrative unit. In our work, we curate a comprehensive dataset by collecting data from multiple online sources for several billion-dollar disasters at the county level. We also analyze how online estimates compare with traditional offline impact estimates for each disaster. Our findings provide insight into how different sources can provide complementary information to assess a disaster.
With the expansion of China's high-speed railway network, the number of railway stations is increasing. The challenge railway companies face is how to reasonably plan the locations of hot standby Electric Multiple Units (EMUs) and determine the rescue coverage area of each hot standby EMU. It is uneconomical to configure a hot standby EMU for each station; railway companies want to use the minimum number of hot standby EMUs to provide rescue coverage for the entire high-speed railway network. In this paper, we develop optimization models for hot standby EMU location and coverage area, respectively. The models aim to maximize the rescue distance and minimize the number of hot standby EMUs. Moreover, to reduce the complexity of the models, a method of merging railway lines is used to transform the "point-to-network" rescue problem into a "point-to-point" one. Two numerical examples are carried out to demonstrate the effectiveness of the models. The developed models are linear, and we use Python 3.7 to call Gurobi 9.5.2 to solve them. The results show that the developed models can provide an optimal and reasonable plan for the locations and coverage areas of hot standby EMUs.
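The abstract states that the linear models are solved by calling Gurobi from Python. A minimal covering-style sketch in that spirit is shown below: it selects the fewest standby-EMU locations such that every station lies within a maximum rescue distance, on a toy merged "point-to-point" line. The variable names, distances, and the single-objective form are illustrative assumptions, not the paper's full formulations, and running it requires a Gurobi installation.

```python
import gurobipy as gp
from gurobipy import GRB

def locate_standby_emus(stations, candidates, dist, max_rescue_km):
    """Minimum-count covering sketch: pick standby-EMU locations so every
    station lies within max_rescue_km of at least one chosen location.
    (Illustrative only; the paper's models also optimize coverage areas.)"""
    m = gp.Model("hot_standby_emu_location")
    x = m.addVars(candidates, vtype=GRB.BINARY, name="open")
    for s in stations:
        reachable = [c for c in candidates if dist[c, s] <= max_rescue_km]
        m.addConstr(gp.quicksum(x[c] for c in reachable) >= 1, name=f"cover_{s}")
    m.setObjective(x.sum(), GRB.MINIMIZE)
    m.optimize()
    return [c for c in candidates if x[c].X > 0.5]

# Toy instance on a merged "point-to-point" line (distances in km)
stations = ["A", "B", "C", "D"]
candidates = ["A", "C", "D"]
dist = {(c, s): abs(i - j) * 80
        for i, c in enumerate(stations) for j, s in enumerate(stations)
        if c in candidates}
print(locate_standby_emus(stations, candidates, dist, max_rescue_km=160))
```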
The rapid development of the low-altitude economy has imposed unprecedented demands on wireless infrastructure to accommodate large-scale drone deployments and facilitate intelligent services in dynamic airspace environments. However, unlocking its full potential in practical applications presents significant challenges. Traditional aerial systems predominantly focus on air-ground communication services, often neglecting the integration of sensing, computation, control, and energy-delivery functions, which hinders their ability to meet diverse mission-critical demands. In addition, the absence of systematic low-altitude airspace planning and management exacerbates issues of dynamic interference in three-dimensional space, coverage instability, and scalability. To overcome these challenges, a comprehensive framework, termed the low-altitude wireless network (LAWN), has emerged to seamlessly integrate communication, sensing, computation, control, and air traffic management into a unified design. This article provides a comprehensive overview of LAWN systems, introducing LAWN system fundamentals and the evolution of functional designs. Subsequently, we delve into performance evaluation metrics and review critical concerns surrounding privacy and security in the open-air network environment. Finally, we present the cutting-edge developments in airspace structuring and air traffic management, providing insights to facilitate the practical deployment of LAWNs.
Solving sparse systems of linear equations is a fundamental problem in the field of numerical methods, with applications spanning from circuit design to urban planning. These problems can have millions of constraints, such as when laying out transistors on a circuit, or trying to optimize traffic light timings, making fast sparse solvers extremely important. However, existing state-of-the-art software-level solutions for solving sparse linear systems, termed iterative solvers, are extremely inefficient on current hardware. This inefficiency can be attributed to two key reasons: (1) poor short-term data reuse, which causes frequent, irregular memory accesses, and (2) complex data dependencies, which limit parallelism. Hence, in this paper, we present an FPGA implementation of the existing Azul accelerator, an SRAM-only hardware accelerator that achieves both high memory bandwidth utilization and arithmetic intensity. Azul features a grid of tiles, each of which is composed of a processing element (PE) and a small independent SRAM memory, which are all connected over a network on chip (NoC). We implement Azul on FPGA using simple RISC-V CPU cores connected to a memory hierarchy of different FPGA memory modules. We utilize custom RISC-V ISA augmentations to implement a task-based programming model for the various PEs, allowing communication over the NoC. Finally, we design simple distributed test cases to functionally verify the FPGA implementation, confirming performance equivalent to an architectural simulation of the Azul framework.
Electron-positron colliders operating in the GeV region of center-of-mass energies, or the Tau-Charm energy region, have been proven to enable competitive frontier research due to several unique features. With the progress of high energy physics in the last two decades, a new-generation Tau-Charm factory, the Super Tau Charm Facility (STCF), has been actively promoted by the particle physics community in China. STCF holds great potential to address fundamental questions such as the essence of color confinement and the matter-antimatter asymmetry in the universe in the coming decades. The main design goals of STCF are a center-of-mass energy ranging from 2 to 7 GeV and a peak luminosity surpassing 5×10^34 cm^-2 s^-1, optimized at a center-of-mass energy of 4 GeV, which is about 50 times that of the currently operating Tau-Charm factory, BEPCII. The STCF accelerator is composed of two main parts: a double-ring collider with the crab-waist collision scheme and an injector that provides top-up injections for both electron and positron beams. As a typical third-generation electron-positron circular collider, the STCF accelerator faces many challenges in both accelerator physics and technology. In this paper, the conceptual design of the STCF accelerator complex is presented, including the ongoing efforts and plans for technological R&D, as well as the required infrastructure. The STCF project aims to secure support from the Chinese central government for its construction during the 15th Five-Year Plan (2026-2030) in China.
Box/cabinet scenarios with stacked objects pose significant challenges for robotic motion due to visual occlusions and constrained free space. Traditional collision-free trajectory planning methods often fail when no collision-free paths exist, and may even lead to catastrophic collisions caused by invisible objects. To overcome these challenges, we propose an operation-aware interactive motion planner (PaiP), a real-time closed-loop planning framework utilizing multimodal tactile perception. This framework autonomously infers object interaction features by perceiving motion effects at interaction interfaces. These interaction features are incorporated into grid maps to generate operational cost maps. Building upon this representation, we extend sampling-based planning methods to interactive planning by optimizing both path cost and operational cost. Experimental results demonstrate that PaiP achieves robust motion in narrow spaces.
This paper estimates the effect of Hurricane Harvey on wages and employment in the construction industry across impacted counties in Texas. Using data from the Quarterly Census of Employment and Wages (QCEW) for the period 2016-2019, I adopt a difference-in-differences event study approach, comparing outcomes in 41 FEMA-designated disaster counties with a set of unaffected southern control counties. I find that Hurricane Harvey had a large and long-lasting impact on labor market outcomes in the construction industry. More precisely, average log wages in treated counties rose by around 7.2 percent relative to control counties two quarters after the hurricane and remained elevated for the next two years. Employment effects were more gradual, showing a statistically significant increase only after six quarters, in line with the lagged nature of large-scale reconstruction activities. These results imply that natural disasters can generate persistent labor demand shocks in local construction markets, with policy implications for disaster recovery planning and workforce mobilization.
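A hedged sketch of the estimation strategy described above, assuming a county-quarter QCEW panel with hypothetical column names (ln_wage, treated, event_time, county, quarter): event-time dummies interacted with treatment, county and calendar-quarter fixed effects, and county-clustered standard errors.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per county-quarter from QCEW, with hypothetical columns
#   ln_wage, treated (1 = FEMA-designated county), event_time
#   (quarters relative to 2017Q3), county, quarter
def event_study(df: pd.DataFrame):
    # Omit event_time == -1 as the reference period and absorb
    # county and calendar-quarter fixed effects.
    model = smf.ols(
        "ln_wage ~ C(event_time, Treatment(reference=-1)):treated"
        " + C(county) + C(quarter)",
        data=df,
    )
    # Cluster standard errors at the county level
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["county"]})

# res = event_study(qcew_panel); print(res.summary())   # qcew_panel is hypothetical
```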
Text-to-image diffusion models help visualize urban futures but can amplify group-level harms. We propose collective recourse: structured community "visual bug reports" that trigger fixes to models and planning workflows. We (1) formalize collective recourse and a practical pipeline (report, triage, fix, verify, closure); (2) situate four recourse primitives within the diffusion stack: counter-prompts, negative prompts, dataset edits, and reward-model tweaks; (3) define mandate thresholds via a mandate score combining severity, volume saturation, representativeness, and evidence; and (4) evaluate a synthetic program of 240 reports. Prompt-level fixes were fastest (median 2.1-3.4 days) but less durable (21-38% recurrence); dataset edits and reward tweaks were slower (13.5 and 21.9 days) yet more durable (12-18% recurrence) with higher planner uptake (30-36%). A threshold of 0.12 yielded 93% precision and 75% recall; increasing representativeness raised recall to 81% with little precision loss. We discuss integration with participatory governance, risks (e.g., overfitting to vocal groups), and safeguards (dashboards, rotating juries).
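A toy illustration of a mandate score combining the four ingredients named above; the weights, the volume cap used for saturation, and the input scaling are assumptions, while the 0.12 decision threshold is taken from the abstract.

```python
def mandate_score(severity, volume, representativeness, evidence,
                  weights=(0.4, 0.2, 0.2, 0.2), volume_cap=50):
    """Toy mandate score in [0, 1] (the paper's exact weighting is not
    given in the abstract; weights and the volume cap are assumptions)."""
    volume_saturation = min(volume / volume_cap, 1.0)   # diminishing returns on report count
    components = (severity, volume_saturation, representativeness, evidence)
    return sum(w * c for w, c in zip(weights, components))

report = {"severity": 0.7, "volume": 18, "representativeness": 0.6, "evidence": 0.5}
score = mandate_score(**report)
print(score, "-> triggers fix" if score >= 0.12 else "-> below threshold")
```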
Active object detection, which aims to identify objects of interest through controlled camera movements, plays a pivotal role in real-world visual perception for autonomous robotic applications, such as manufacturing tasks (e.g., assembly operations) performed in unknown environments. A dual control for exploration and exploitation (DCEE) algorithm is presented within goal-oriented control systems to achieve efficient active object detection, leveraging active learning by incorporating variance-based uncertainty estimation in the cost function. This novel method employs an exploration-exploitation balanced cost function to actively guide the selection of the next viewpoint. Specifically, active object detection is achieved through the development of a reward function that encodes knowledge about the confidence variation of objects as a function of viewpoint position within a given domain. By identifying the unknown parameters of this function, the system generates an optimal viewpoint planning strategy. DCEE integrates parameter estimation of the reward function and view planning, ensuring a balanced trade-off between the exploitation of learned knowledge and active exploration during the planning process. Moreover, it demonstrates remarkable adaptability across diverse scenarios, effectively handling LEGO brick detection at varying locations. Importantly, the algorithm maintains consistent configuration settings and a fixed number of parameters across various scenarios, underscoring its efficiency and robustness. To validate the proposed approach, extensive numerical studies, high-fidelity virtual simulations, and real-world experiments under various scenarios were conducted. The results confirm the effectiveness of DCEE in active object detection, showcasing superior performance compared to existing methods, including model predictive control (MPC) and entropy approaches.
This study presents the first multi-platform sentiment analysis of public opinion on the 15-minute city concept across Twitter, Reddit, and news media. Using compressed transformer models and Llama-3-8B for annotation, we classify sentiment across heterogeneous text domains. Our pipeline handles long-form and short-form text, supports consistent annotation, and enables reproducible evaluation. We benchmark five models (DistilRoBERTa, DistilBERT, MiniLM, ELECTRA, TinyBERT) using stratified 5-fold cross-validation, reporting F1-score, AUC, and training time. DistilRoBERTa achieved the highest F1 (0.8292), TinyBERT the best efficiency, and MiniLM the best cross-platform consistency. Results show that news data yields inflated performance due to class imbalance, Reddit suffers from summarization loss, and Twitter presents a moderate challenge. Compressed models perform competitively, challenging the assumption that larger models are necessary. We identify platform-specific trade-offs and propose directions for scalable, real-world sentiment classification in urban planning discourse.
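A sketch of the stratified 5-fold benchmarking protocol described above, reporting macro-F1, AUC, and training time. A TF-IDF plus logistic-regression pipeline stands in for the fine-tuned transformer encoders, and labels are assumed to be integer-coded three-class sentiment; both are simplifying assumptions.

```python
import time
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score

def benchmark(texts, labels, n_splits=5, seed=42):
    """Stratified 5-fold protocol; texts is a list of str, labels an int array
    coded 0/1/2 (negative/neutral/positive). The classifier is a stand-in."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    f1s, aucs, fit_times = [], [], []
    for train_idx, test_idx in skf.split(texts, labels):
        clf = make_pipeline(TfidfVectorizer(max_features=20000),
                            LogisticRegression(max_iter=1000))
        t0 = time.time()
        clf.fit([texts[i] for i in train_idx], labels[train_idx])
        fit_times.append(time.time() - t0)
        proba = clf.predict_proba([texts[i] for i in test_idx])
        f1s.append(f1_score(labels[test_idx], proba.argmax(axis=1), average="macro"))
        aucs.append(roc_auc_score(labels[test_idx], proba, multi_class="ovr"))
    return np.mean(f1s), np.mean(aucs), np.mean(fit_times)
```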
The Wasserstein barycenter extends the Euclidean mean to the space of probability measures by minimizing the weighted sum of squared 2-Wasserstein distances. We develop a free-support algorithm for computing Wasserstein barycenters that avoids entropic regularization and instead follows the formal Riemannian geometry of Wasserstein space. In our approach, barycenter atoms evolve as particles advected by averaged optimal-transport displacements, with barycentric projections of optimal transport plans used in place of Monge maps when the latter do not exist. This yields a geometry-aware particle-flow update that preserves sharp features of the Wasserstein barycenter while remaining computationally tractable. We establish theoretical guarantees, including consistency of barycentric projections, monotone descent and convergence to stationary points, stability with respect to perturbations of the inputs, and resolution consistency as the number of atoms increases. Empirical studies on averaging probability distributions, Bayesian posterior aggregation, image prototypes and classification, and large-scale clustering demonstrate accuracy and scalability of the proposed particle-flow approach, positioning it as a principled alternative to both linear programming and regularized solvers.
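A compact sketch of the particle-flow update under stated assumptions: exact (unregularized) OT plans are computed with the POT library, barycentric projections of those plans replace Monge maps, and atoms are advected by the weighted average displacement. The step size, iteration count, and toy Gaussian inputs are illustrative.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def barycenter_particle_flow(measures, weights, X, n_iters=50, step=1.0):
    """Free-support barycenter sketch: atoms X are advected by the weighted
    average of barycentric-projection displacements toward each input measure."""
    n = X.shape[0]
    a = np.full(n, 1.0 / n)                        # uniform atom weights
    for _ in range(n_iters):
        displacement = np.zeros_like(X)
        for (Y, b), w in zip(measures, weights):
            M = ot.dist(X, Y)                      # squared Euclidean cost
            plan = ot.emd(a, b, M)                 # exact OT plan (no entropic reg.)
            T_X = (plan @ Y) / a[:, None]          # barycentric projection of the plan
            displacement += w * (T_X - X)
        X = X + step * displacement                # particle-flow update
    return X

rng = np.random.default_rng(0)
measures = [(rng.normal(m, 0.3, size=(200, 2)), np.full(200, 1 / 200)) for m in (-2.0, 2.0)]
X0 = rng.normal(0.0, 1.0, size=(100, 2))
print(barycenter_particle_flow(measures, [0.5, 0.5], X0).mean(axis=0))
```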
Saltwater Intrusion (SWI) threatens freshwater availability, agriculture, and ecosystem resilience in coastal regions. While sea-level rise (SLR) is a known driver of long-term salinization, the counteracting role of freshwater discharge remains underexamined. Here, we combine long-term observations with numerical modeling and machine learning reconstruction to quantify the buffering capacity of freshwater outflows across the U.S. coastline. In systems such as Delaware Bay and parts of the Gulf and South Atlantic coasts, the salt front has shifted seaward in recent decades, linked to increased discharge, despite SLR over that time period. We show that a 10 - 35% increase in freshwater flow can offset the salinity impact of 0.5 m of SLR, though regional variation is significant. With future discharge trends diverging spatially, SWI responses will be highly uneven. These results highlight the critical role of freshwater management in mitigating salinity risks under climate change, with implications for water resource resilience, coastal planning, and long-term adaptation strategies.
Classical reinforcement learning (RL) methods often struggle in complex, high-dimensional environments because of their extensive parameter requirements and challenges posed by stochastic, non-deterministic settings. This study introduces quantum deep reinforcement learning (QDRL) to train humanoid agents efficiently. While previous quantum RL models focused on smaller environments, such as wheeled robots and robotic arms, our work pioneers the application of QDRL to humanoid robotics, specifically in environments with substantial observation and action spaces, such as MuJoCo's Humanoid-v4 and Walker2d-v4. Using parameterized quantum circuits, we explored a hybrid quantum-classical setup to directly navigate high-dimensional state spaces, bypassing traditional mapping and planning. By integrating quantum computing with deep RL, we aim to develop models that can efficiently learn complex navigation tasks in humanoid robots. We evaluated the performance of the Soft Actor-Critic (SAC) in classical RL against its quantum implementation. The results show that the quantum SAC achieves an 8% higher average return (246.40) than the classical SAC (228.36) after 92% fewer steps, highlighting the accelerated learning potential of quantum computing in RL tasks.
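A minimal sketch of a parameterized quantum circuit of the kind used as the quantum component in such hybrid setups, written with PennyLane (the abstract does not name a toolkit, so this choice, the qubit count, and the encoding are assumptions): observations are angle-encoded, processed by entangling layers, and read out as Pauli-Z expectations that would feed the classical actor-critic heads of the hybrid SAC.

```python
import numpy as np
import pennylane as qml

n_qubits, n_layers = 6, 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def policy_circuit(inputs, weights):
    # Angle-encode a rescaled slice of the humanoid observation, process it
    # with trainable entangling layers, and read out per-wire expectations.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
weights = np.random.default_rng(0).uniform(0, np.pi, size=shape)
obs = np.linspace(-1.0, 1.0, n_qubits)            # stand-in observation slice
print(policy_circuit(obs, weights))
```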
Social robots offer a promising solution for autonomously guiding patients through physiotherapy exercise sessions, but effective deployment requires advanced decision-making to adapt to patient needs. A key challenge is the scarcity of patient behavior data for developing robust policies. To address this, we engaged 33 expert healthcare practitioners as patient proxies, using their interactions with our robot to inform a patient behavior model capable of generating exercise performance metrics and subjective scores on perceived exertion. We trained a reinforcement learning-based policy in simulation, demonstrating that it can adapt exercise instructions to individual exertion tolerances and fluctuating performance, while also being applicable to patients at different recovery stages with varying exercise plans.
With the rapid development of the new energy vehicle industry, the efficient disassembly and recycling of power batteries have become a critical challenge for the circular economy. In current unstructured disassembly scenarios, the dynamic nature of the environment severely limits the robustness of robotic perception, posing a significant barrier to autonomous disassembly in industrial applications. This paper proposes a continual learning framework based on Neuro-Symbolic task and motion planning (TAMP) to enhance the adaptability of embodied intelligence systems in dynamic environments. Our approach integrates a multimodal perception cross-validation mechanism into a bidirectional reasoning flow: the forward working flow dynamically refines and optimizes action strategies, while the backward learning flow autonomously collects effective data from historical task executions to facilitate continual system learning, enabling self-optimization. Experimental results show that the proposed framework improves the task success rate in dynamic disassembly scenarios from 81.68% to 100%, while reducing the average number of perception misjudgments from 3.389 to 1.128. This research provides a new paradigm for enhancing the robustness and adaptability of embodied intelligence in complex industrial environments.
Reinforcement learning (RL) has shown promise in a large number of robotic control tasks. Nevertheless, its deployment on unmanned aerial vehicles (UAVs) remains challenging, mainly because of the reliance on accurate dynamic models and platform-specific sensing, which hinders cross-platform transfer. This paper presents the CORB-Planner (Corridor-as-Observations for RL B-spline planner), a real-time, RL-based trajectory planning framework for high-speed autonomous UAV flight across heterogeneous platforms. The key idea is to combine B-spline trajectory generation with an RL policy that produces successive control points, conditioned on a compact safe flight corridor (SFC) representation obtained via heuristic search. The SFC abstracts obstacle information in a low-dimensional form, mitigating overfitting to platform-specific details and reducing sensitivity to model inaccuracies. To narrow the sim-to-real gap, we adopt an easy-to-hard progressive training pipeline in simulation. A value-based soft decomposed-critic Q (SDCQ) algorithm is used to learn effective policies within approximately ten minutes of training. Benchmarks in simulation and real-world tests demonstrate real-time planning on lightweight onboard hardware and maximum flight speeds up to 8.2 m/s in dense, cluttered environments without external positioning. Compatibility with various UAV configurations (quadrotors, hexarotors) and modest onboard compute underlines the generality and robustness of CORB-Planner for practical deployment.
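A small sketch of the B-spline step only: successive control points (here a fixed toy list rather than RL-policy outputs) are turned into a smooth position trajectory with a clamped uniform cubic B-spline via SciPy. The knot construction, degree, and sampling density are illustrative choices, not the paper's exact parameterization.

```python
import numpy as np
from scipy.interpolate import BSpline

def control_points_to_trajectory(ctrl_pts, degree=3, n_samples=100):
    """Turn successive control points into a smooth position trajectory
    via a clamped uniform B-spline."""
    ctrl_pts = np.asarray(ctrl_pts)                # (N, 3) waypoint-like control points
    n = len(ctrl_pts)
    # Clamped knot vector so the spline starts and ends on the first/last control point
    knots = np.concatenate([np.zeros(degree),
                            np.linspace(0, 1, n - degree + 1),
                            np.ones(degree)])
    spline = BSpline(knots, ctrl_pts, degree)
    return spline(np.linspace(0, 1, n_samples))    # sampled (n_samples, 3) positions

# e.g. control points emitted step by step inside the current safe flight corridor
ctrl = [[0, 0, 1], [1, 0.5, 1.2], [2, 1.5, 1.2], [3, 1.4, 1.0], [4, 2.0, 1.0]]
traj = control_points_to_trajectory(ctrl)
print(traj[0], traj[-1])
```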
We present TransZero, a model-based reinforcement learning algorithm that removes the sequential bottleneck in Monte Carlo Tree Search (MCTS). Unlike MuZero, which constructs its search tree step by step using a recurrent dynamics model, TransZero employs a transformer-based network to generate multiple latent future states simultaneously. Combined with the Mean-Variance Constrained (MVC) evaluator that eliminates dependence on inherently sequential visitation counts, our approach enables the parallel expansion of entire subtrees during planning. Experiments in MiniGrid and LunarLander show that TransZero achieves up to an eleven-fold speedup in wall-clock time compared to MuZero while maintaining sample efficiency. These results demonstrate that parallel tree construction can substantially accelerate model-based reinforcement learning, bringing real-time decision-making in complex environments closer to practice. The code is publicly available on GitHub.