In automated driving, predicting the trajectories of surrounding vehicles supports reasoning about scene dynamics and enables safe planning for the ego vehicle. However, existing models treat prediction as an instantaneous task of forecasting future trajectories from observed information. As time proceeds, the next prediction is made independently of the previous one, so the model cannot correct its errors during inference and will repeat them. To alleviate this problem and better leverage temporal data, we propose a novel retrospection technique. Through training on closed-loop rollouts, the model learns to use aggregated feedback. Given new observations, it reflects on previous predictions and analyzes its errors to improve the quality of subsequent predictions. The model can thus learn to correct systematic errors during inference. Comprehensive experiments on nuScenes and Argoverse demonstrate a considerable decrease in minimum Average Displacement Error of up to 31.9% compared to the state-of-the-art baseline without retrospection. We further showcase the robustness of our technique by demonstrating better handling of out-of-distribution scenarios with undetected road users.
Change detection (CD) in remote sensing imagery plays a crucial role in applications such as urban planning, damage assessment, and resource management. While deep learning approaches have significantly advanced CD performance, current methods suffer from poor domain adaptability and require extensive labeled data for retraining when applied to new scenarios. This limitation severely restricts their practical application across different datasets. In this work, we propose DAM-Net: a Domain Adaptation Network with Micro-Labeled Fine-Tuning for CD. Our network introduces adversarial domain adaptation to CD, utilizing a specially designed segmentation discriminator and an alternating training strategy to enable effective transfer between domains. Additionally, we propose a novel Micro-Labeled Fine-Tuning approach that strategically selects and labels a minimal number of samples (less than 1%) to enhance domain adaptation. The network incorporates a Multi-Temporal Transformer for feature fusion and an optimized backbone structure based on previous research. Experiments conducted on the LEVIR-CD and WHU-CD datasets demonstrate that DAM-Net significantly outperforms existing domain adaptation methods, achieving performance comparable to semi-supervised approaches that require 10% labeled data while using only 0.3% labeled samples. Our approach significantly advances cross-dataset CD applications and provides a new paradigm for efficient domain adaptation in remote sensing. The source code of DAM-Net will be made publicly available upon publication.
Recent advancements in dialogue policy planning have emphasized optimizing system agent policies to achieve predefined goals, focusing on strategy design, trajectory acquisition, and efficient training paradigms. However, these approaches often overlook the critical role of user characteristics, which are essential in real-world scenarios like conversational search and recommendation, where interactions must adapt to individual user traits such as personality, preferences, and goals. To address this gap, we first conduct a comprehensive study utilizing task-specific user personas to systematically assess dialogue policy planning under diverse user behaviors. By leveraging realistic user profiles for different tasks, our study reveals significant limitations in existing approaches, highlighting the need for user-tailored dialogue policy planning. Building on this foundation, we present the User-Tailored Dialogue Policy Planning (UDP) framework, which incorporates an Intrinsic User World Model to model user traits and feedback. UDP operates in three stages: (1) User Persona Portraying, using a diffusion model to dynamically infer user profiles; (2) User Feedback Anticipating, leveraging a Brownian Bridge-inspired anticipator to predict user reactions; and (3) User-Tailored Policy Planning, integrating these insights to optimize response strategies. To ensure robust performance, we further propose an active learning approach that prioritizes challenging user personas during training. Comprehensive experiments on benchmarks, including collaborative and non-collaborative settings, demonstrate the effectiveness of UDP in learning user-specific dialogue strategies. Results validate the protocol's utility and highlight UDP's robustness, adaptability, and potential to advance user-centric dialogue systems.
This paper addresses motion planning in dynamic environments by extending the concepts of the Velocity Obstacle (VO) and Nonlinear Velocity Obstacle (NLVO) to the Acceleration Obstacle (AO) and Nonlinear Acceleration Obstacle (NAO). Similarly to the VO and NLVO, the AO and NAO represent the sets of constant accelerations of the maneuvering robot that lead to collision with obstacles moving along linear and nonlinear trajectories, respectively. In contrast to prior works, we derive analytically the exact boundaries of the AO and NAO. To build an intuitive understanding of these representations, we derive the AO in several steps: we first extend the VO to the Basic Acceleration Obstacle (BAO), the set of constant accelerations of the robot that would collide with an obstacle moving at constant acceleration, assuming zero initial velocities of the robot and obstacle. This is then extended to the AO by allowing arbitrary initial velocities of the robot and obstacle. Finally, we derive the NAO which, in addition to the prior assumptions, accounts for obstacles moving along arbitrary trajectories. The introduction of the NAO allows the generation of safe avoidance maneuvers that directly account for the robot's second-order dynamics, with acceleration as its control input. The AO and NAO are demonstrated in several examples of selecting avoidance maneuvers in challenging road traffic. We show that the use of the NAO drastically reduces the adjustment rate of the maneuvering robot's acceleration in complex road traffic scenarios. The presented approach enables reactive and efficient navigation for multiple robots, with potential application to autonomous vehicles operating in complex dynamic environments.
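As a minimal numerical sketch of the idea (not the paper's analytic boundary derivation), the following snippet tests whether a candidate constant acceleration lies inside an acceleration obstacle by forward-simulating the relative motion; the horizon, time step, and combined radius are illustrative assumptions.

```python
import numpy as np

def in_acceleration_obstacle(p_rel, v_rel, a_robot, a_obs, radius,
                             horizon=5.0, dt=0.01):
    """Numerically test whether a constant robot acceleration lies in the AO.

    p_rel, v_rel : obstacle position/velocity relative to the robot at t = 0
    a_robot, a_obs : constant accelerations of the robot and the obstacle
    radius : combined collision radius of robot and obstacle
    Returns True if the relative distance drops below `radius` within the horizon.
    """
    a_rel = np.asarray(a_obs, float) - np.asarray(a_robot, float)
    for t in np.arange(0.0, horizon, dt):
        # relative position under constant relative acceleration
        p_t = np.asarray(p_rel, float) + np.asarray(v_rel, float) * t + 0.5 * a_rel * t**2
        if np.linalg.norm(p_t) < radius:
            return True
    return False

# Toy example: obstacle 10 m ahead, closing at 2 m/s; decelerating avoids the collision
print(in_acceleration_obstacle([10, 0], [-2, 0], a_robot=[0, 0],  a_obs=[0, 0], radius=1.0))
print(in_acceleration_obstacle([10, 0], [-2, 0], a_robot=[-3, 0], a_obs=[0, 0], radius=1.0))
```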
This paper presents an appendix to the original NeBula autonomy solution developed by TEAM CoSTAR (Collaborative SubTerranean Autonomous Robots), which participated in the DARPA Subterranean Challenge. Specifically, this paper presents extensions to NeBula's hardware, software, and algorithmic components that focus on increasing the range and scale of the exploration environment. From the algorithmic perspective, we discuss the following extensions to the original NeBula framework: (i) large-scale geometric and semantic environment mapping; (ii) an adaptive positioning system; (iii) probabilistic traversability analysis and local planning; (iv) large-scale POMDP-based global motion planning and exploration behavior; (v) large-scale networking and decentralized reasoning; (vi) communication-aware mission planning; and (vii) multi-modal ground-aerial exploration solutions. We demonstrate the application and deployment of the presented systems and solutions in various large-scale underground environments, including limestone mine exploration scenarios as well as deployment in the DARPA Subterranean Challenge.
The ability to update a path plan is required for autonomous mobile robots navigating uncertain environments. This paper proposes a re-planning strategy using a multilayer planning and control framework for cases where the robot's environment is partially known. A medial-axis graph-based planner defines a global path plan based on known obstacles, where each edge in the graph corresponds to a unique corridor. A mixed-integer model predictive control (MPC) method detects whether a terminal constraint derived from the global plan is infeasible, subject to a non-convex description of the local environment. Infeasibility detection is used to trigger efficient global re-planning via medial-axis graph edge deletion. The proposed re-planning strategy is demonstrated experimentally.
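A minimal sketch of the edge-deletion replanning loop is given below; the mixed-integer MPC feasibility check is abstracted into a `corridor_is_feasible` callback, and the graph, weights, and node names are hypothetical.

```python
import networkx as nx

def replan_on_infeasibility(graph, start, goal, corridor_is_feasible):
    """Plan on the medial-axis graph; delete the edge whose corridor the MPC
    reports as infeasible and replan, until a feasible route (or none) remains."""
    g = graph.copy()
    while True:
        try:
            path = nx.shortest_path(g, start, goal, weight="length")
        except nx.NetworkXNoPath:
            return None  # no corridor sequence left
        # walk the corridors; an infeasible terminal constraint triggers edge deletion
        blocked = next(((u, v) for u, v in zip(path, path[1:])
                        if not corridor_is_feasible(u, v)), None)
        if blocked is None:
            return path
        g.remove_edge(*blocked)

# Toy example: corridor (A, B) turns out to be blocked in the local environment
G = nx.Graph()
G.add_edge("A", "B", length=1.0)
G.add_edge("A", "C", length=2.0)
G.add_edge("B", "D", length=1.0)
G.add_edge("C", "D", length=2.0)
print(replan_on_infeasibility(G, "A", "D", lambda u, v: (u, v) != ("A", "B")))
```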
In a conventional taxi booking system, most operational decisions are made by drivers, which is difficult to replicate in unmanned vehicles. To address this challenge, we introduce a taxi booking system that assists autonomous vehicles in picking up customers. The system can allocate an autonomous vehicle (AV) and plan service trips for a customer request. We use our own AV to serve customers, who make taxi requests through a mobile application. In addition to the customer and the AV, we build a server to monitor customers and AVs; it also supports communication between a customer and an AV once the AV has decided to pick up the customer.
Learning to perform manipulation tasks from human videos is a promising approach for teaching robots. However, many manipulation tasks require changing control parameters during task execution, such as force, which visual data alone cannot capture. In this work, we leverage sensing devices such as armbands that measure human muscle activities and microphones that record sound, to capture the details in the human manipulation process, and enable robots to extract task plans and control parameters to perform the same task. To achieve this, we introduce Chain-of-Modality (CoM), a prompting strategy that enables Vision Language Models to reason about multimodal human demonstration data -- videos coupled with muscle or audio signals. By progressively integrating information from each modality, CoM refines a task plan and generates detailed control parameters, enabling robots to perform manipulation tasks based on a single multimodal human video prompt. Our experiments show that CoM delivers a threefold improvement in accuracy for extracting task plans and control parameters compared to baselines, with strong generalization to new task setups and objects in real-world robot experiments. Videos and code are available at https://chain-of-modality.github.io
Improving automatic target recognition (ATR) of ships at sea requires an advanced inverse synthetic aperture radar (ISAR) processor - one that not only provides focused images but can also determine the pose of the ship. The pose tells us whether the image shows a profile (vertical plane) view, a plan (horizontal plane) view, or some view in between. If the processor can provide this information, then the ATR processor can try to match the images with known vertical or horizontal features of ships and, in conjunction with estimated ship length, narrow the set of possible identifications. This paper extends the work of Melendez and Bennett [M-B, Ref. 1] by combining a focus algorithm with a method that models the angles of the ship relative to the radar. In M-B, the algorithm was limited to a single angle and the plane of rotation was not determined. This assumption may be acceptable for a short-time image, where there is limited data available to determine the pose. The present paper instead models the ship rotation with two angles - an aspect angle, representing rotation in the horizontal plane, and a tilt angle, representing variations in the effective grazing angle to the ship.
This paper studies the synthesis of a joint control and active perception policy for a stochastic system modeled as a partially observable Markov decision process (POMDP), subject to temporal logic specifications. The POMDP actions influence both system dynamics (control) and the emission function (perception). Beyond task completion, the planner seeks to maximize information gain about certain temporal events (the secret) through coordinated perception and control. To enable active information acquisition, we introduce minimizing the Shannon conditional entropy of the secret as a planning objective, alongside maximizing the probability of satisfying the temporal logic formula within a finite horizon. Using a variant of observable operators in hidden Markov models (HMMs) and POMDPs, we establish key properties of the conditional entropy gradient with respect to policy parameters. These properties facilitate efficient policy gradient computation. We validate our approach through graph-based examples, inspired by common security applications with UAV surveillance.
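For intuition about the information-theoretic objective (the observable-operator gradient machinery itself is not reproduced here), the snippet below evaluates the Shannon conditional entropy of a secret given observations from an explicit joint distribution; the distributions are toy values.

```python
import numpy as np

def conditional_entropy(joint):
    """H(S | O) in bits for a joint distribution P(s, o):
    rows index secret values, columns index observation sequences."""
    joint = np.asarray(joint, float)
    p_obs = joint.sum(axis=0)                      # P(o)
    h = 0.0
    for o, po in enumerate(p_obs):
        if po == 0:
            continue
        post = joint[:, o] / po                    # P(s | o)
        post = post[post > 0]
        h += po * -(post * np.log2(post)).sum()    # P(o) * H(S | O = o)
    return h

# Perception policy A: observations barely depend on the secret
print(conditional_entropy([[0.26, 0.24], [0.24, 0.26]]))   # close to 1 bit
# Perception policy B: observations almost reveal the secret
print(conditional_entropy([[0.45, 0.05], [0.05, 0.45]]))   # much lower entropy
```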
Suicide is a critical global public health issue, with millions experiencing suicidal ideation (SI) each year. Online spaces enable individuals to express SI and seek peer support. While prior research has shown the potential of detecting SI using machine learning and natural language analysis, a key limitation is the lack of a theoretical framework for understanding the underlying factors affecting high-risk suicidal intent. To bridge this gap, we adopted the Interpersonal Theory of Suicide (IPTS) as an analytic lens to analyze 59,607 posts from Reddit's r/SuicideWatch, categorizing them into SI dimensions (Loneliness, Lack of Reciprocal Love, Self Hate, and Liability) and risk factors (Thwarted Belongingness, Perceived Burdensomeness, and Acquired Capability of Suicide). We found that high-risk SI posts express planning and attempts, methods and tools, and weaknesses and pain. We also examined the language of supportive responses through psycholinguistic and content analyses, finding that individuals respond differently to different stages of SI posts. Finally, we explored the role of AI chatbots in providing effective supportive responses to SI posts. We found that although AI improved structural coherence, expert evaluations highlight persistent shortcomings in providing dynamic, personalized, and deeply empathetic support. These findings underscore the need for careful reflection and deeper understanding in both the development and consideration of AI-driven interventions for effective mental health support.
This paper investigates backdoor attack planning in stochastic control systems modeled as Markov Decision Processes (MDPs). In a backdoor attack, the adversary provides a control policy that behaves well in the original MDP in order to pass the testing phase. However, when such a policy is deployed together with a trigger policy, which perturbs the system dynamics at runtime, it optimizes the attacker's objective instead. To jointly solve for the control policy and its trigger, we formulate the attack planning problem as a constrained optimal planning problem in an MDP with an augmented state space, where the objective is to maximize the attacker's total reward in the system with an activated trigger, subject to the constraint that the control policy is near-optimal in the original MDP. We then introduce a gradient-based optimization method to solve for the optimal backdoor attack policy as a pair of coordinated control and trigger policies. Experimental results from a case study validate the effectiveness of our approach in achieving stealthy backdoor attacks.
A robot navigating an outdoor environment with no prior knowledge of the space must rely on its local sensing to perceive its surroundings and plan. This can come in the form of a local metric map or a local policy with some fixed horizon. Beyond that lies a fog of unknown space marked with some fixed cost. A limited planning horizon can often result in myopic decisions that lead the robot off course or, worse, into very difficult terrain. Ideally, we would like the robot to have full map knowledge, which can be orders of magnitude larger than a local cost map; in practice, this is intractable due to sparse sensing information and is often computationally expensive. In this work, we make the key observation that long-range navigation only requires identifying good frontier directions for planning rather than full map knowledge. To this end, we propose the Long Range Navigator (LRN), which learns an intermediate affordance representation mapping high-dimensional camera images to `affordable' frontiers for planning, and then optimizes for maximum alignment with the desired goal. Notably, LRN is trained entirely on unlabeled ego-centric videos, making it easy to scale and adapt to new platforms. Through extensive off-road experiments on Spot and a Big Vehicle, we find that augmenting existing navigation stacks with LRN reduces human interventions at test time and leads to faster decision making, indicating the relevance of LRN. https://personalrobotics.github.io/lrn
The "avoid - shift - improve" framework and the European Clean Vehicles Directive set the path for improving the efficiency and ultimately decarbonizing the transport sector. While electric buses have already been adopted in several cities, regional bus lines may pose additional challenges due to the potentially longer distances they have to travel. In this work, we model and solve the electric bus scheduling problem, lexicographically minimizing the size of the bus fleet, the number of charging stops, and the total energy consumed, to provide decision support for bus operators planning to replace their diesel-powered fleet with zero emission vehicles. We propose a graph representation which allows partial charging without explicitly relying on time variables and derive 3-index and 2-index mixed-integer linear programming formulations for the multi-depot electric vehicle scheduling problem. While the 3-index model can be solved by an off-the-shelf solver directly, the 2-index model relies on an exponential number of constraints to ensure the correct depot pairing. These are separated in a cutting plane fashion. We propose a set of instances with up to 80 service trips to compare the two approaches, showing that, with a small number of depots, the compact 3-index model performs very well. However, as the number of depots increases the developed branch-and-cut algorithm proves to be of value. These findings not only offer algorithmic insights but the developed approaches also provide actionable guidance for transit agencies and operators, allowing to quantify trade-offs between fleet size, energy efficiency, and infrastructure needs under realistic operational conditions.
Achieving versatile and explosive motion with robustness against dynamic uncertainties is a challenging task. Introducing parallel compliance into quadrupedal designs is expected to enhance locomotion performance, which, however, makes the control task even harder. This work addresses this challenge by proposing a general template model and establishing an efficient motion planning and control pipeline. We first propose a reduced-order template model - the dual-legged actuated spring-loaded inverted pendulum with trunk rotation - which explicitly models parallel compliance by decoupling spring effects from active motor actuation. With this template model, versatile acrobatic motions, such as pronking, froggy jumping, and hop-turns, are generated by a dual-layer trajectory optimization, where a singularity-free body rotation representation is taken into consideration. Integrated with a linear singularity-free tracking controller, enhanced quadrupedal locomotion is achieved. Comparisons with an existing template model reveal the improved accuracy and generalization of our model. Hardware experiments with a rigid quadruped and a newly designed compliant quadruped demonstrate that i) the template model enables the generation of versatile dynamic motion; ii) parallel elasticity enhances explosive motion, for example, the maximal pronking distance, hop-turn yaw angle, and froggy jumping distance increase by at least 25%, 15%, and 25%, respectively; and iii) parallel elasticity improves robustness against dynamic uncertainties, including modelling errors and external disturbances; for example, the allowable support surface height variation increases by 100% for robust froggy jumping.
End-to-end autonomous driving aims to produce planning trajectories directly from raw sensors. Currently, most approaches integrate perception, prediction, and planning modules into a fully differentiable network, promising great scalability. However, these methods typically rely on deterministic modeling of online maps in the perception module for guiding or constraining vehicle planning, which may incorporate erroneous perception information and further compromise planning safety. To address this issue, we delve into the importance of online map uncertainty for enhancing autonomous driving safety and propose a novel paradigm named UncAD. Specifically, UncAD first estimates the uncertainty of the online map in the perception module. It then leverages the uncertainty to guide the motion prediction and planning modules to produce multi-modal trajectories. Finally, to achieve safer autonomous driving, UncAD introduces an uncertainty-collision-aware planning selection strategy that uses the online map uncertainty to evaluate and select the best trajectory. In this study, we incorporate UncAD into various state-of-the-art (SOTA) end-to-end methods. Experiments on the nuScenes dataset show that integrating UncAD, with only a 1.9% increase in parameters, can reduce collision rates by up to 26% and the drivable area conflict rate by up to 42%. Codes, pre-trained models, and demo videos can be accessed at https://github.com/pengxuanyang/UncAD.
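The following is a schematic sketch of an uncertainty- and collision-aware selection rule over multi-modal candidate trajectories; the linear scoring, weights, and callbacks are illustrative assumptions rather than UncAD's exact criterion.

```python
import numpy as np

def select_trajectory(candidates, collision_risk, map_uncertainty,
                      w_risk=1.0, w_unc=0.5):
    """Pick the candidate trajectory with the lowest combined cost.

    candidates      : list of (T, 2) arrays of planned waypoints
    collision_risk  : callable waypoint -> estimated collision probability
    map_uncertainty : callable waypoint -> local online-map uncertainty
    """
    def cost(traj):
        risks = np.array([collision_risk(p) for p in traj])
        uncs = np.array([map_uncertainty(p) for p in traj])
        return w_risk * risks.mean() + w_unc * uncs.mean()

    costs = [cost(t) for t in candidates]
    best = int(np.argmin(costs))
    return best, costs

# Toy example: two straight candidates, the second one skirts an uncertain region
cands = [np.stack([np.linspace(0, 10, 6), np.zeros(6)], axis=1),
         np.stack([np.linspace(0, 10, 6), np.ones(6)], axis=1)]
risk = lambda p: 0.3 if p[1] > 0.5 else 0.05
unc = lambda p: 0.4 if p[0] > 5 and p[1] > 0.5 else 0.1
print(select_trajectory(cands, risk, unc))
```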
Future planned lepton colliders, both in the circular and linear configurations, can effectively work as virtual and quasi-real photon-photon colliders and are expected to stimulate an intense physics program in the next few years. In this paper, we suggest to consider photon-photon scattering as a useful source of information on transverse momentum dependent fragmentation functions (TMD FFs), complementing semi-inclusive deep inelastic scattering and $e^+e^-$ annihilation processes, which provide most of the present phenomenological information on TMD FFs. As a first illustrative example, we study two-hadron azimuthal asymmetries around the jet thrust-axis in the process $\ell^+\ell^-\to\gamma^* \gamma\to q\bar q\to h_1 h_2 + X$, in which in a circular lepton collider one tagged, deeply-virtual photon scatters off an untagged quasi-real photon, both originating from the initial lepton beams, producing inclusively an almost back-to-back light-hadron pair with large transverse momentum, in the $\gamma^*\gamma$ center of mass frame. Similar processes, in a more complicated environment due to the presence of initial hadronic states, can also be studied in ultraperipheral collisions at the LHC and the planned future hadron colliders.
The Open Loop Layout Problem (OLLP) seeks to position rectangular cells of varying dimensions on a plane without overlap, minimizing transportation costs computed as the flow-weighted sum of pairwise distances between cells. A key challenge in OLLP is to compute accurate inter-cell distances along feasible paths that avoid rectangle intersections. Existing approaches approximate inter-cell distances using centroids, a simplification that can ignore physical constraints, resulting in infeasible layouts or underestimated distances. This study proposes the first mathematical model that incorporates exact door-to-door distances and feasible paths under the Euclidean metric, with cell doors acting as pickup and delivery points. Feasible paths between doors must either follow rectangle edges as corridors or take direct, unobstructed routes. To address the NP-hardness of the problem, we present a metaheuristic framework with a novel encoding scheme that embeds exact path calculations. Experiments on standard benchmark instances confirm that our approach consistently outperforms existing methods, delivering superior solution quality and practical applicability.
In this work, we present a novel approach to bias the driving style of an artificial race driver (ARD) for online time-optimal trajectory planning. Our method leverages a nonlinear model predictive control (MPC) framework that combines time minimization with exit speed maximization at the end of the planning horizon. We introduce a new MPC terminal cost formulation based on the trajectory planned in the previous MPC step, enabling ARD to adapt its driving style from early to late apex maneuvers in real-time. Our approach is computationally efficient, allowing for low replan times and long planning horizons. We validate our method through simulations, comparing the results against offline minimum-lap-time (MLT) optimal control and online minimum-time MPC solutions. The results demonstrate that our new terminal cost enables ARD to bias its driving style, and achieve online lap times close to the MLT solution and faster than the minimum-time MPC solution. Our approach paves the way for a better understanding of the reasons behind human drivers' choice of early or late apex maneuvers.
B* is a novel optimization framework that addresses a critical challenge in fixed-base manipulator robotics: optimal base placement. Current methods rely on pre-computed kinematics databases generated through sampling to search for solutions. However, they face an inherent trade-off between solution optimality and computational efficiency when determining sampling resolution. To address these limitations, B* unifies multiple objectives without database dependence. The framework employs a two-layer hierarchical approach. The outer layer systematically manages terminal constraints through progressive tightening, particularly for base mobility, enabling feasible initialization and broad solution exploration. The inner layer addresses non-convexities in each outer-layer subproblem through sequential local linearization, converting the original problem into tractable sequential linear programming (SLP). Testing across multiple robot platforms demonstrates B*'s effectiveness. The framework achieves solution optimality five orders of magnitude better than sampling-based approaches while maintaining perfect success rates and reduced computational overhead. Operating directly in configuration space, B* enables simultaneous path planning with customizable optimization criteria. B* serves as a crucial initialization tool that bridges the gap between theoretical motion planning and practical deployment, where feasible trajectory existence is fundamental.
This paper proposes a genetic algorithm-based kinodynamic planning algorithm (GAKD) for car-like vehicles navigating uneven terrains modeled as triangular meshes. The algorithm's distinct feature is trajectory optimization over a fixed-length receding horizon using a genetic algorithm with heuristic-based mutation, ensuring the vehicle's controls remain within its valid operational range. By addressing challenges posed by uneven terrain meshes, such as changing face normals, GAKD offers a practical solution for path planning in complex environments. Comparative evaluations against Model Predictive Path Integral (MPPI) and log-MPPI methods show that GAKD achieves up to 20 percent improvement in traversability cost while maintaining comparable path length. These results demonstrate GAKD's potential in improving vehicle navigation on challenging terrains.
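A bare-bones sketch of the receding-horizon idea is shown below: a genetic algorithm with a simple heuristic mutation optimizes a fixed-length control sequence for a 1-D double integrator; the dynamics, cost, bounds, and mutation heuristic are placeholders rather than GAKD's terrain-mesh formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
HORIZON, POP, GENS = 10, 40, 60
A_MAX = 1.0                      # control bound (placeholder)

def rollout_cost(controls, x0=0.0, v0=0.0, goal=8.0, dt=0.5):
    """Integrate a 1-D double integrator and penalize distance to goal plus effort."""
    x, v, cost = x0, v0, 0.0
    for a in controls:
        v += a * dt
        x += v * dt
        cost += 0.01 * a**2
    return cost + abs(goal - x)

def heuristic_mutation(controls):
    """Nudge one control toward the average of its neighbours, then clip to bounds."""
    c = controls.copy()
    i = rng.integers(len(c))
    neighbours = c[max(i - 1, 0):i + 2]
    c[i] = np.clip(neighbours.mean() + rng.normal(0, 0.2), -A_MAX, A_MAX)
    return c

pop = rng.uniform(-A_MAX, A_MAX, size=(POP, HORIZON))
for _ in range(GENS):
    costs = np.array([rollout_cost(ind) for ind in pop])
    elite = pop[np.argsort(costs)[:POP // 2]]                 # selection
    children = np.array([heuristic_mutation(elite[rng.integers(len(elite))])
                         for _ in range(POP - len(elite))])   # mutation
    pop = np.vstack([elite, children])

best = pop[np.argmin([rollout_cost(ind) for ind in pop])]
print("best cost:", rollout_cost(best))
```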
End-to-end autonomous driving has made impressive progress in recent years. Previous end-to-end autonomous driving approaches often decouple planning and motion tasks, treating them as separate modules. This separation overlooks the potential benefits that planning can gain from learning on out-of-distribution data encountered in motion tasks. However, unifying these tasks poses significant challenges, such as constructing shared contextual representations and handling the unobservability of other vehicles' states. To address these challenges, we propose TTOG, a novel two-stage trajectory generation framework. In the first stage, a diverse set of trajectory candidates is generated, while the second stage focuses on refining these candidates using vehicle state information. To mitigate the issue of unavailable surrounding vehicle states, TTOG employs a state estimator trained on self-vehicle data and subsequently extended to other vehicles. Furthermore, we introduce ECSA (equivariant context-sharing scene adapter) to enhance the generalization of scene representations across different agents. Experimental results demonstrate that TTOG achieves state-of-the-art performance across both planning and motion tasks. Notably, on the challenging open-loop nuScenes dataset, TTOG reduces the L2 distance by 36.06\%. Furthermore, on the closed-loop Bench2Drive dataset, our approach achieves a 22\% improvement in the driving score (DS), significantly outperforming existing baselines.
KAGRA is a kilometer-scale cryogenic gravitational-wave (GW) detector in Japan. It joined the 4th joint observing run (O4) in May 2023 in collaboration with the Laser Interferometer GW Observatory (LIGO) in the USA, and Virgo in Italy. After one month of observations, KAGRA entered a break period to enhance its sensitivity to GWs, and it is planned to rejoin O4 before its scheduled end in October 2025. To accurately recover the information encoded in the GW signals, it is essential to properly calibrate the observed signals. We employ a photon calibration (Pcal) system as a reference signal injector to calibrate the output signals obtained from the telescope. In ideal future conditions, the uncertainty in Pcal could dominate the uncertainty in the observed data. In this paper, we present the methods used to estimate the uncertainty in the Pcal systems employed during KAGRA O4 and report an estimated system uncertainty of 0.79%, which is three times lower than the uncertainty achieved in the previous 3rd joint observing run (O3) in 2020. Additionally, we investigate the uncertainty in the Pcal laser power sensors, which had the highest impact on the Pcal uncertainty, and estimate the beam positions on the KAGRA main mirror, which had the second highest impact. The Pcal systems in KAGRA are the first fully functional calibration systems for a cryogenic GW telescope. To avoid interference with the KAGRA cryogenic systems, the Pcal systems incorporate unique features regarding their placement and the use of telephoto cameras, which can capture images of the mirror surface at almost normal incidence. As future GW telescopes, such as the Einstein Telescope, are expected to adopt cryogenic techniques, the performance of the KAGRA Pcal systems can serve as a valuable reference.
Safe and efficient path planning in parking scenarios presents a significant challenge due to cluttered environments filled with static and dynamic obstacles. To address this, we propose a novel and computationally efficient planning strategy that seamlessly integrates predictions of dynamic obstacles into the planning process, ensuring the generation of collision-free paths. Our approach builds upon the conventional Hybrid A* algorithm by introducing a time-indexed variant that explicitly accounts for the predictions of dynamic obstacles during node exploration in the graph, thus enabling dynamic obstacle avoidance. We integrate the time-indexed Hybrid A* algorithm within an online planning framework to compute local paths at each planning step, guided by an adaptively chosen intermediate goal. The proposed method is validated in diverse parking scenarios, including perpendicular, angled, and parallel parking. Through simulations, we showcase our approach's potential to greatly improve efficiency and safety compared to a state-of-the-art spline-based planning method for parking situations.
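A stripped-down illustration of the time-indexing idea on a grid follows; the real planner expands continuous car-like motion primitives, whereas here nodes are (cell, time step) pairs and successors are pruned against a predicted obstacle cell. The toy scenario and all names are assumptions.

```python
import heapq

def time_indexed_astar(start, goal, free, predicted_obstacle, max_t=50):
    """A* over (cell, time) nodes; a successor is rejected if the predicted
    obstacle occupies that cell at that time step."""
    def h(cell):                      # admissible Manhattan heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, 0, [start])]   # (f, g, cell, t, path)
    seen = set()
    while open_set:
        f, g, cell, t, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if (cell, t) in seen or t >= max_t:
            continue
        seen.add((cell, t))
        # waiting in place is a valid move when an obstacle is about to pass
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]:
            nxt = (cell[0] + dx, cell[1] + dy)
            if free(nxt) and nxt != predicted_obstacle(t + 1):
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, t + 1, path + [nxt]))
    return None

# Toy example: a dynamic obstacle sweeps along row 1 while the robot crosses it
free = lambda c: 0 <= c[0] <= 4 and 0 <= c[1] <= 2
obstacle = lambda t: (min(t, 4), 1)          # predicted obstacle cell at time t
print(time_indexed_astar((0, 0), (4, 2), free, obstacle))
```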
Timely and accurate detection of hurricane debris is critical for effective disaster response and community resilience. While post-disaster aerial imagery is readily available, robust debris segmentation solutions applicable across multiple disaster regions remain limited. Developing a generalized solution is challenging due to varying environmental and imaging conditions that alter debris' visual signatures across different regions, further compounded by the scarcity of training data. This study addresses these challenges by fine-tuning pre-trained foundational vision models, achieving robust performance with a relatively small, high-quality dataset. Specifically, this work introduces an open-source dataset comprising approximately 1,200 manually annotated aerial RGB images from Hurricanes Ian, Ida, and Ike. To mitigate human biases and enhance data quality, labels from multiple annotators are strategically aggregated and visual prompt engineering is employed. The resulting fine-tuned model, named fCLIPSeg, achieves a Dice score of 0.70 on data from Hurricane Ida -- a disaster event entirely excluded during training -- with virtually no false positives in debris-free areas. This work presents the first event-agnostic debris segmentation model requiring only standard RGB imagery during deployment, making it well-suited for rapid, large-scale post-disaster impact assessments and recovery planning.
Generating natural and physically plausible character motion remains challenging, particularly for long-horizon control with diverse guidance signals. While prior work combines high-level diffusion-based motion planners with low-level physics controllers, these systems suffer from domain gaps that degrade motion quality and require task-specific fine-tuning. To tackle this problem, we introduce UniPhys, a diffusion-based behavior cloning framework that unifies motion planning and control into a single model. UniPhys enables flexible, expressive character motion conditioned on multi-modal inputs such as text, trajectories, and goals. To address accumulated prediction errors over long sequences, UniPhys is trained with the Diffusion Forcing paradigm, learning to denoise noisy motion histories and handle discrepancies introduced by the physics simulator. This design allows UniPhys to robustly generate physically plausible, long-horizon motions. Through guided sampling, UniPhys generalizes to a wide range of control signals, including unseen ones, without requiring task-specific fine-tuning. Experiments show that UniPhys outperforms prior methods in motion naturalness, generalization, and robustness across diverse control tasks.
Despite continuous advancements in cancer treatment, brain metastatic disease remains a significant complication of primary cancer and is associated with an unfavorable prognosis. One approach for improving diagnosis, management, and outcomes is to implement algorithms based on artificial intelligence for the automated segmentation of both pre- and post-treatment MRI brain images. Such algorithms rely on volumetric criteria for lesion identification and treatment response assessment, which are still not available in clinical practice. Therefore, it is critical to establish tools for rapid volumetric segmentation that can be translated to clinical practice and that are trained on high-quality annotated data. The BraTS-METS 2025 Lighthouse Challenge aims to address this critical need by establishing inter-rater and intra-rater variability in dataset annotation, generating high-quality annotated datasets from four individual instances of segmentation by neuroradiologists recorded on video (two instances "from scratch" and two instances after AI pre-segmentation). This high-quality annotated dataset will be used for the testing phase of the 2025 Lighthouse Challenge and will be publicly released at the completion of the challenge. The 2025 Lighthouse Challenge will also release the 2023 and 2024 segmented datasets, which were annotated using an established pipeline of pre-segmentation, student annotation, checking by two neuroradiologists, and finalization by one neuroradiologist. The challenge builds upon its previous edition by including post-treatment cases in the dataset. Using these high-quality annotated datasets, the 2025 Lighthouse Challenge plans to benchmark algorithms for the automated segmentation of pre- and post-treatment brain metastases (BM), trained on diverse and multi-institutional datasets of MRI images obtained from patients with brain metastases.
The Midwest, with its vast agricultural lands, is rapidly emerging as a key region for utility-scale solar expansion. However, traditional power planning has yet to integrate local economic impact directly into capacity expansion to guide optimal siting decisions. Moreover, existing economic assessments tend to emphasize local benefits while overlooking the opportunity costs of converting productive farmland for solar development. This study addresses these gaps by endogenously incorporating local economic metrics into a power system planning model to evaluate how economic impacts influence solar siting, accounting for the cost of lost agricultural output. We analyze all counties within the Great Lakes region, constructing localized supply and marginal benefit curves that are embedded within a multi-objective optimization framework aimed at minimizing system costs and maximizing community economic benefits. Our findings show that counties with larger economies and lower farmland productivity deliver the highest local economic benefit per megawatt (MW) of installed solar capacity. In Ohio, for example, large counties generate up to $34,500 per MW, driven in part by high property tax revenues, while smaller counties yield 31% less. Accounting for the opportunity cost of displaced agricultural output reduces local benefits by up to 16%, depending on farmland quality. A scenario prioritizing solar investment in counties with higher economic returns increases total economic benefits by $1 billion (or 11%) by 2040, with solar investment shifting away from Michigan and Wisconsin (down by 39%) toward Ohio and Indiana (up by 75%), with only a marginal increase of 0.5% in system-wide costs. These findings underscore the importance of integrating economic considerations into utility-scale solar planning to better align decarbonization goals with regional and local economic development.
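As a toy sketch of the siting trade-off (not the paper's capacity-expansion model), the snippet below greedily allocates capacity by a weighted net-benefit score that nets out a farmland opportunity cost and a system cost; all county names, dollar values, and the greedy rule are made up for illustration.

```python
# Hypothetical per-county data: local benefit ($/MW), farmland opportunity cost ($/MW),
# system cost ($/MW), and available capacity (MW). Values are made up for illustration.
counties = {
    "Large county": {"benefit": 34_500, "opp_cost": 3_000, "sys_cost": 10_000, "cap": 400},
    "Small county": {"benefit": 23_800, "opp_cost": 1_000, "sys_cost":  9_500, "cap": 300},
    "Farm county":  {"benefit": 30_000, "opp_cost": 5_500, "sys_cost":  9_800, "cap": 500},
}

def allocate(target_mw, weight_benefit=1.0, weight_cost=1.0):
    """Greedily place capacity where weighted (benefit - opportunity cost) minus
    weighted system cost per MW is highest, until the capacity target is met."""
    def score(name):
        d = counties[name]
        return weight_benefit * (d["benefit"] - d["opp_cost"]) - weight_cost * d["sys_cost"]

    plan, remaining = {}, target_mw
    for name in sorted(counties, key=score, reverse=True):
        if remaining <= 0:
            break
        mw = min(counties[name]["cap"], remaining)
        plan[name] = mw
        remaining -= mw
    return plan

print(allocate(target_mw=800))
```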
We present the first results from the CAPERS survey, utilizing PRISM observations with the JWST/NIRSpec MSA in the PRIMER-UDS field. With just 14% of the total planned data volume, we spectroscopically confirm two new bright galaxies ($M_{\rm UV}\sim -20.4$) at redshifts $z = 10.562\pm0.034$ and $z = 11.013\pm0.028$. We examine their physical properties, morphologies, and star formation histories, finding evidence for recent bursting star formation in at least one galaxy thanks to the detection of strong (EW$_0\sim70$ Å) H$\gamma$ emission. Combining our findings with previous studies of similarly bright objects at high-$z$, we further assess the role of stochastic star formation processes in shaping early galaxy populations. Our analysis finds that the majority of bright ($M_{\rm UV}\lesssim -20$) spectroscopically confirmed galaxies at $z>10$ were likely observed during a starburst episode, characterized by a median SFR$_{10}$/SFR$_{100}\sim2$, although with substantial scatter. Our work also finds tentative evidence that $z>10$ galaxies are more preferentially in a bursting phase than similarly bright $z\sim6$ galaxies. We finally discuss the prospects of deeper spectroscopic observations of a statistically significant number of bright galaxies to quantify the true impact of bursting star formation on the evolution of the bright end of the ultraviolet luminosity function at these early epochs.
As generative AI tools like ChatGPT become integral to everyday writing, critical questions arise about how to preserve writers' sense of agency and ownership when using these tools. Yet, a systematic understanding of how AI assistance affects different aspects of the writing process - and how this shapes writers' agency - remains underexplored. To address this gap, we conducted a systematic review of 109 HCI papers using the PRISMA approach. From this literature, we identify four overarching design strategies for AI writing support: structured guidance, guided exploration, active co-writing, and critical feedback - mapped across the four key cognitive processes in writing: planning, translating, reviewing, and monitoring. We complement this analysis with interviews of 15 writers across diverse domains. Our findings reveal that writers' desired levels of AI intervention vary across the writing process: content-focused writers (e.g., academics) prioritize ownership during planning, while form-focused writers (e.g., creatives) value control over translating and reviewing. Writers' preferences are also shaped by contextual goals, values, and notions of originality and authorship. By examining when ownership matters, what writers want to own, and how AI interactions shape agency, we surface both alignment and gaps between research and user needs. Our findings offer actionable design guidance for developing human-centered writing tools for co-writing with AI, on human terms.