Bird's-eye view (BEV) perception has gained significant attention because it provides a unified representation for fusing multi-view images and enables a wide range of downstream autonomous driving tasks, such as forecasting and planning. Recent state-of-the-art models utilize projection-based methods, which formulate BEV perception as query learning to bypass explicit depth estimation. While this paradigm has shown promising advancements, it still falls short of real-world applications because of the lack of uncertainty modeling and its expensive computational requirements. In this work, we introduce GaussianLSS, a novel uncertainty-aware BEV perception framework that revisits unprojection-based methods, specifically the Lift-Splat-Shoot (LSS) paradigm, and enhances them with depth uncertainty modeling. GaussianLSS represents spatial dispersion by learning a soft depth mean and computing the variance of the depth distribution, which implicitly captures object extents. We then transform the depth distribution into 3D Gaussians and rasterize them to construct uncertainty-aware BEV features. We evaluate GaussianLSS on the nuScenes dataset, achieving state-of-the-art performance among unprojection-based methods. In particular, it offers significant advantages in speed, running 2.5x faster, and in memory efficiency, using 0.3x the memory of projection-based methods, while achieving competitive performance with only a 0.4% IoU difference.
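The soft depth mean and variance described above can be illustrated with a short sketch: from per-pixel logits over discrete depth bins, a softmax gives a categorical depth distribution whose first and second moments yield the mean and variance. This is a minimal NumPy illustration under an assumed bin layout and toy logits, not the paper's implementation:

```python
import numpy as np

def depth_mean_and_variance(logits, depth_bins):
    """Soft depth statistics from per-pixel depth-bin logits.

    logits:     (..., D) unnormalized scores over D depth bins
    depth_bins: (D,) bin-center depths in meters
    Returns (mean, var) with the same leading shape as logits.
    """
    # Softmax over the depth dimension gives a categorical distribution.
    p = np.exp(logits - logits.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    mean = (p * depth_bins).sum(axis=-1)                    # E[d]
    var = (p * depth_bins ** 2).sum(axis=-1) - mean ** 2    # E[d^2] - E[d]^2
    return mean, var

bins = np.linspace(1.0, 60.0, 60)   # hypothetical 1 m bins up to 60 m
logits = np.zeros(60)
logits[20] = 4.0                    # a soft peak near bins[20] = 21 m
mu, var = depth_mean_and_variance(logits, bins)
```

A sharper peak (larger logit) drives the variance toward zero, while a flat distribution yields a large variance, which is the spatial dispersion the Gaussians encode.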
Understanding the complex myocardial architecture is critical for diagnosing and treating heart disease. However, existing methods often struggle to accurately capture this intricate structure from Diffusion Tensor Imaging (DTI) data, particularly due to the lack of ground truth labels and the ambiguous, intertwined nature of fiber trajectories. We present a novel deep learning framework for unsupervised clustering of myocardial fibers, providing a data-driven approach to identifying distinct fiber bundles. We uniquely combine a Bidirectional Long Short-Term Memory network, which captures local sequential information along fibers, with a Transformer autoencoder that learns global shape features, incorporating essential anatomical context pointwise. Clustering these representations with a density-based algorithm identifies 33 to 62 robust clusters, successfully capturing subtle distinctions in fiber trajectories at varying levels of granularity. Our framework offers a new, flexible, and quantitative way to analyze myocardial structure, achieving a level of delineation that, to our knowledge, has not been previously achieved, with potential applications in improving surgical planning, characterizing disease-related remodeling, and, ultimately, advancing personalized cardiac care.
End-to-end autonomous driving has achieved remarkable progress by integrating perception, prediction, and planning into a fully differentiable framework. Yet, to fully realize its potential, effective online trajectory evaluation is indispensable for ensuring safety. Trajectory evaluation becomes much more effective when the future outcomes of a given trajectory are forecast, a goal that can be achieved by employing a world model to capture environmental dynamics and predict future states. We therefore propose WoTE, an end-to-end driving framework that leverages a BEV world model to predict future BEV states for Trajectory Evaluation. The proposed BEV world model is latency-efficient compared with image-level world models and can be seamlessly supervised using off-the-shelf BEV-space traffic simulators. We validate our framework on both the NAVSIM benchmark and the closed-loop Bench2Drive benchmark based on the CARLA simulator, achieving state-of-the-art performance. Code is released at https://github.com/liyingyanUCAS/WoTE.
This paper presents field-tested use cases from Search and Rescue (SAR) missions, highlighting the co-design of mobile robots and communication systems to support Edge-Cloud architectures based on 5G Standalone (SA). The main goal is to contribute to the effective cooperation of multiple robots and first responders. Our field experience includes the development of Hybrid Wireless Sensor Networks (H-WSNs) for risk and victim detection, smartphones integrated into the Robot Operating System (ROS) as Edge devices for mission requests and path planning, real-time Simultaneous Localization and Mapping (SLAM) via Multi-Access Edge Computing (MEC), and implementation of Uncrewed Ground Vehicles (UGVs) for victim evacuation in different navigation modes. These experiments, conducted in collaboration with actual first responders, underscore the need for intelligent network resource management, balancing low-latency and high-bandwidth demands. Network slicing is key to ensuring critical emergency services are performed despite challenging communication conditions. The paper identifies architectural needs, lessons learned, and challenges to be addressed by 6G technologies to enhance emergency response capabilities.
The TELOS Collaboration is committed to producing and analysing lattice data reproducibly, and sharing its research openly. In this document, we set out the ways that we make this happen, where there is scope for improvement, and how we plan to achieve this. This is intended to work both as a statement of policy, and a guide to practice for those beginning to work with us. Some details and recommendations are specific to the context in which the Collaboration works (such as references to requirements imposed by funders in the United Kingdom); however, most recommendations may serve as a template for other collaborations looking to make their own work reproducible. Full tutorials on every aspect of reproducibility are beyond the scope of this document, but we refer to other resources for further information.
Trajectory prediction of other vehicles is crucial for autonomous vehicles, with applications from missile guidance to UAV collision avoidance. Typically, target trajectories are assumed deterministic, but real-world aerial vehicles exhibit stochastic behavior, such as evasive maneuvers or gliders circling in thermals. This paper uses Conditional Normalizing Flows, an unsupervised Machine Learning technique, to learn and predict the stochastic behavior of targets of guided missiles using trajectory data. The trained model predicts the distribution of future target positions based on initial conditions and parameters of the dynamics. Samples from this distribution are clustered using a time series k-means algorithm to generate representative trajectories, termed virtual targets. The method is fast and target-agnostic, requiring only training data in the form of target trajectories. Thus, it serves as a drop-in replacement for deterministic trajectory predictions in guidance laws and path planning. Simulated scenarios demonstrate the approach's effectiveness for aerial vehicles with random maneuvers, bridging the gap between deterministic predictions and stochastic reality, advancing guidance and control algorithms for autonomous vehicles.
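The clustering step above can be sketched as follows. The abstract describes a time-series k-means over samples drawn from the flow; the sketch below uses a plain Euclidean k-means on equal-length trajectories with a deterministic farthest-point initialization (a simplification of, e.g., tslearn's TimeSeriesKMeans), with synthetic maneuver bundles standing in for flow samples:

```python
import numpy as np

def trajectory_kmeans(trajs, k, iters=100):
    """Euclidean k-means over equal-length trajectories (N, T, 2); the
    centroids act as representative 'virtual targets'. Farthest-point
    initialization keeps the demo deterministic."""
    X = trajs.reshape(len(trajs), -1)           # flatten the time axis
    centers = [X[0]]
    for _ in range(k - 1):                      # greedy farthest-point init
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):                      # Lloyd iterations
        d = np.linalg.norm(X[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers.reshape(k, *trajs.shape[1:]), labels

# Synthetic stand-in for flow samples: two distinct maneuver bundles.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
up = np.stack([np.stack([t, t], -1) + 0.01 * rng.normal(size=(20, 2))
               for _ in range(30)])
down = np.stack([np.stack([t, -t], -1) + 0.01 * rng.normal(size=(20, 2))
                 for _ in range(30)])
virtual_targets, labels = trajectory_kmeans(np.concatenate([up, down]), k=2)
```

Each centroid in `virtual_targets` is one representative trajectory that a guidance law can treat as a deterministic prediction.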
In this document we summarize the output of the US community planning exercises for particle physics that were performed between 2020 and 2023 and comment upon progress made since then towards our common scientific goals. This document leans heavily on the formal report of the Particle Physics Project Prioritization Panel and other recent US planning documents, often quoting them verbatim to retain the community consensus.
Writing proposals and job applications is arguably one of the most important tasks in the career of a scientist. The proposed ideas must be scientifically compelling, but how a proposal is planned, written, and presented can make an enormous difference. This Perspective is the third in a series aimed at training the writing skills of professional astronomers. In the first two papers we concentrated on the writing of papers, here we concentrate on how proposals and job applications can be optimally written and presented. We discuss how to select where to propose or apply, how to optimise your writing, and add notes on the potential use of artificial intelligence tools. This guide is aimed primarily at more junior researchers, but we hope that our observations and suggestions may also be helpful for more experienced applicants, as well as for reviewers and funding agencies.
The SiTian Project represents a groundbreaking initiative in astronomy, aiming to deploy a global network of telescopes, each with a 1-meter aperture, for comprehensive time-domain sky surveys. The network's innovative architecture features multiple observational nodes, each comprising three strategically aligned telescopes equipped with filters. This design enables three-color (g, r, i) channel imaging within each node, facilitating precise and coordinated observations. As a pathfinder to the full-scale project, the Mini-SiTian Project serves as the scientific and technological validation platform, utilizing three 30-centimeter aperture telescopes to validate the methodologies and technologies planned for the broader SiTian network. This paper focuses on the development and implementation of the Master Control System (MCS), the central command hub for the Mini-SiTian array. The MCS is designed to facilitate seamless communication with the SiTian Brain, the project's central processing and decision-making unit, while ensuring accurate task allocation, real-time status monitoring, and optimized observational workflows. The system adopts a robust architecture that separates front-end and back-end functionalities. A key innovation of the MCS is its ability to dynamically adjust observation plans in response to transient source alerts, enabling rapid and coordinated scans of target sky regions...(abridged)
This paper provides a comprehensive introduction to the Mini-SiTian Real-Time Image Processing pipeline (STRIP) and evaluates its operational performance. The STRIP pipeline is specifically designed for real-time alert triggering and light curve generation for transient sources. Applied to both simulated and real observational data of the Mini-SiTian survey, the pipeline successfully identified various types of variable sources, including stellar flares, supernovae, variable stars, and asteroids, while meeting the requirement of completing data reduction within 5 minutes. For the real observational dataset, the pipeline detected 1 flare event, 127 variable stars, and 14 asteroids across three monitored sky regions. Additionally, two datasets were generated: a real-bogus training dataset comprising 218,818 training samples, and a variable star light curve dataset with 421 instances. These datasets will be used to train machine learning algorithms, which are planned for future integration into STRIP.
In this paper we carry out a computational study of a novel microscopic follow-the-leader model for traffic flow on road networks. We assume that each driver has their own origin and destination and wants to complete the journey in minimal time. We also assume that each driver is able to take rational decisions at junctions and can change route while moving, depending on the traffic conditions. The main novelty of the model is that vehicles can automatically and anonymously share information about their position, destination, and planned path whenever they are within a certain distance of each other. The information acquired during the journey is used to optimize the route itself. In the limit case of an infinite communication range, we recover the classical Reactive User Equilibrium and Dynamic User Equilibrium.
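The range-limited, anonymous information exchange at the heart of the model can be sketched as a pairwise merge of knowledge whenever two vehicles are within communication range; the set-valued knowledge model below is an illustrative assumption, not the paper's actual data structure:

```python
import numpy as np

def exchange_information(positions, knowledge, comm_range):
    """Symmetric, anonymous information exchange: whenever two vehicles
    are within comm_range of each other, each merges the other's
    knowledge (here modeled simply as sets of observed facts)."""
    n = len(positions)
    merged = [set(k) for k in knowledge]
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) <= comm_range:
                union = merged[i] | merged[j]
                merged[i] = merged[j] = union
    return merged

# Three vehicles: the first two are in range, the third is isolated.
pos = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
know = [{"a"}, {"b"}, {"c"}]
out = exchange_information(pos, know, comm_range=2.0)
```

As `comm_range` grows without bound, every vehicle ends up with the union of all knowledge, which mirrors the infinite-range limit in which the classical user equilibria are recovered.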
Motion planning in uncertain environments like complex urban areas is a key challenge for autonomous vehicles (AVs). The aim of our research is to investigate how AVs can navigate crowded, unpredictable scenarios with multiple pedestrians while maintaining safe and efficient vehicle behavior. So far, most research has concentrated on static or deterministic traffic-participant behavior. This paper introduces a novel algorithm for motion planning in crowded spaces by combining social force principles for simulating realistic pedestrian behavior with a risk-aware motion planner. We evaluate this new algorithm in a 2D simulation environment to rigorously assess AV-pedestrian interactions, demonstrating that our algorithm enables safe, efficient, and adaptive motion planning, particularly in highly crowded urban environments, a level of performance not previously achieved. This study does not take real-time constraints into consideration and has so far been demonstrated only in simulation. Further studies are needed to investigate the novel algorithm within a complete software stack for AVs on real cars, covering the entire perception, planning, and control pipeline in crowded scenarios. We release the code developed in this research as an open-source resource for further studies and development. It can be accessed at the following link: https://github.com/TUM-AVS/PedestrianAwareMotionPlanning
Autonomous vehicles (AVs) must navigate dynamic urban environments where occlusions and perception limitations introduce significant uncertainties. This research builds upon and extends existing approaches in risk-aware motion planning and occlusion tracking to address these challenges. While prior studies have developed individual methods for occlusion tracking and risk assessment, a comprehensive method integrating these techniques has not been fully explored. We, therefore, enhance a phantom agent-centric model by incorporating sequential reasoning to track occluded areas and predict potential hazards. Our model enables realistic scenario representation and context-aware risk evaluation by modeling diverse phantom agents, each with distinct behavior profiles. Simulations demonstrate that the proposed approach improves situational awareness and balances proactive safety with efficient traffic flow. While these results underline the potential of our method, validation in real-world scenarios is necessary to confirm its feasibility and generalizability. By utilizing and advancing established methodologies, this work contributes to safer and more reliable AV planning in complex urban environments. To support further research, our method is available as open-source software at: https://github.com/TUM-AVS/OcclusionAwareMotionPlanning
Although battery technology has advanced tremendously over the past decade, it continues to be a bottleneck for the mass adoption of electric aircraft in long-haul cargo and passenger delivery. The onboard energy must therefore be used efficiently. Energy consumption modeling research offers increasingly accurate mathematical models, but there is scant research on real-time energy optimization at an operational level. Additionally, few publications include landing and take-off energy demands in their governing models. This work presents fundamental energy equations and proposes a proportional-integral-derivative (PID) controller. The proposed method demonstrates a unique approach to an energy consumption model that tracks real-time energy optimization along a predetermined path. The proposed PID controller was tested in simulation, and the results show its effectiveness and accuracy in driving the actual airspeed to converge to the optimal velocity without knowledge of the system dynamics. We also propose a model-predictive method to minimize the energy usage in landing and take-off by optimizing the flight trajectory.
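A PID controller of the kind described can be sketched in a few lines; the gains, the first-order airspeed plant, and the fixed setpoint below are illustrative assumptions, not the paper's actual model:

```python
class PID:
    """Minimal PID loop driving measured airspeed toward an
    optimal-velocity setpoint (illustrative gains and plant)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt                 # I term accumulates
        deriv = (err - self.prev_err) / self.dt        # D term damps
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy first-order airspeed plant v' = u - drag * v, Euler-integrated;
# the controller needs no knowledge of these dynamics.
dt, v, v_opt = 0.05, 0.0, 25.0
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=dt)
for _ in range(400):                                   # 20 s of simulation
    u = pid.update(v_opt, v)
    v += dt * (u - 0.1 * v)
```

The integral term removes the steady-state offset that pure proportional control would leave against the drag load, which is why the tracked airspeed converges to the setpoint.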
This document is submitted as input to the European Strategy for Particle Physics Update (ESPPU). The U.S.-based Electron-Ion Collider (EIC) aims to understand how the complex dynamics of confined quarks and gluons make up nucleons, nuclei, and all visible matter, and determine their macroscopic properties. In April 2024, the EIC project received approval for critical decision 3A (CD-3A), allowing for Long-Lead Procurement and bringing its realization another step closer. The ePIC Collaboration was established in July 2022 around the realization of a general-purpose detector at the EIC. The EIC is based in the U.S.A. but is a genuinely international project. In fact, a large group of European scientists is already involved in the EIC community: currently, about a quarter of the EIC User Group (consisting of over 1500 scientists) and 29% of the ePIC Collaboration (consisting of $\sim$1000 members) is based in Europe. This European involvement is not only an important driver of the EIC, but can also be beneficial to a number of related ongoing and planned particle physics experiments at CERN. In this document, the connections between the scientific questions addressed at CERN and at the EIC are outlined. The aim is to highlight how the many synergies between the CERN particle physics research and the EIC project will foster progress at the forefront of collider physics.
In this article, a new approach for 3D motion planning, applicable to aerial vehicles, is proposed to connect an initial and a final configuration subject to pitch-rate and yaw-rate constraints. The motion planning problem for a curvature-constrained vehicle over the surface of a sphere is identified as an intermediary problem to be solved, and it is the focus of this paper. The optimal path candidates for a vehicle with a minimum turning radius $r$ moving over a unit sphere are derived using a phase portrait approach. We show through simple proofs that the optimal path is $CGC$ or a concatenation of $C$ segments, where $C = L, R$ denotes a turn of radius $r$ and $G$ denotes a great circular arc. We generalize the previous result of optimal paths being $CGC$ and $CCC$ paths for $r \in \left(0, \frac{1}{2} \right]\bigcup\{\frac{1}{\sqrt{2}}\}$ to $r \leq \frac{\sqrt{3}}{2}$ to account for vehicles with a larger $r$. We show that the optimal path is $CGC$ or $CCCC$ for $r \leq \frac{1}{\sqrt{2}}$, and $CGC$, $CC_\pi C$, or $CCCCC$ for $r \leq \frac{\sqrt{3}}{2}$. Additionally, we analytically construct all candidate paths and provide the code in a publicly accessible repository.
This article addresses time-optimal path planning for a vehicle capable of moving both forward and backward on a unit sphere with a unit maximum speed, and constrained by a maximum absolute turning rate $U_{max}$. The proposed formulation can be utilized for optimal attitude control of underactuated satellites, optimal motion planning for spherical rolling robots, and optimal path planning for mobile robots on spherical surfaces or uneven terrains. By utilizing Pontryagin's Maximum Principle and analyzing phase portraits, it is shown that for $U_{max}\geq1$, the optimal path connecting a given initial configuration to a desired terminal configuration falls within a sufficient list of 23 path types, each comprising at most 6 segments. These segments belong to the set $\{C,G,T\}$, where $C$ represents a tight turn with radius $r=\frac{1}{\sqrt{1+U_{max}^2}}$, $G$ represents a great circular arc, and $T$ represents a turn-in-place motion. Closed-form expressions for the angles of each path in the sufficient list are derived. The source code for solving the time-optimal path problem and visualization is publicly available at https://github.com/sixuli97/Optimal-Spherical-Convexified-Reeds-Shepp-Paths.
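The tight-turn radius quoted above follows directly from unit speed and the turning-rate bound; a one-line helper makes the boundary case U_max = 1 (where r = 1/sqrt(2)) explicit:

```python
import math

def tight_turn_radius(u_max):
    """Radius of the tight C turn on the unit sphere for a vehicle with
    unit speed and maximum turning rate u_max, from the stated formula
    r = 1 / sqrt(1 + u_max^2)."""
    return 1.0 / math.sqrt(1.0 + u_max ** 2)

r1 = tight_turn_radius(1.0)   # boundary case U_max = 1 -> r = 1/sqrt(2)
```

Larger turning-rate bounds yield tighter turns (smaller r), so for all U_max >= 1 the C segments stay at or below the 1/sqrt(2) radius where the sufficient list of 23 path types applies.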
In this article, a novel combined aerial cooperative tethered carrying and path planning framework is introduced, with a special focus on applications in confined environments. The proposed work aims to solve the path planning problem for a formation of two quadrotors carrying a rope hanging below them and passing through or around obstacles. A novel composition mechanism is proposed, which simplifies the degrees of freedom of the combined aerial system and expresses the corresponding states in a compact form. Given the state of the composition, a dynamic body is generated that encapsulates the quadrotors-rope system and makes the procedure of collision checking between the system and the environment more efficient. By utilizing these two abstractions, an RRT path planning scheme is implemented and a collision-free path for the formation is generated. This path is decomposed back into the quadrotors' desired positions, which are fed to a Model Predictive Controller (MPC) for each quadrotor. The efficiency of the proposed framework is experimentally evaluated.
Computer use agents automate digital tasks by directly interacting with graphical user interfaces (GUIs) on computers and mobile devices, offering significant potential to enhance human productivity by completing an open-ended space of user queries. However, current agents face significant challenges: imprecise grounding of GUI elements, difficulty with long-horizon task planning, and performance bottlenecks from relying on single generalist models for diverse cognitive tasks. To this end, we introduce Agent S2, a novel compositional framework that delegates cognitive responsibilities across various generalist and specialist models. We propose a novel Mixture-of-Grounding technique to achieve precise GUI localization and introduce Proactive Hierarchical Planning, which dynamically refines action plans at multiple temporal scales in response to evolving observations. Evaluations demonstrate that Agent S2 establishes new state-of-the-art (SOTA) performance on three prominent computer use benchmarks. Specifically, Agent S2 achieves 18.9% and 32.7% relative improvements over leading baseline agents such as Claude Computer Use and UI-TARS on the OSWorld 15-step and 50-step evaluations. Moreover, Agent S2 generalizes effectively to other operating systems and applications, surpassing previous best methods by a relative 52.8% on WindowsAgentArena and 16.52% on AndroidWorld. Code available at https://github.com/simular-ai/Agent-S.
This study focuses on the Embodied Complex-Question Answering task, in which an embodied robot must understand human questions with intricate structures and abstract semantics. The core of this task lies in making appropriate plans based on perception of the visual environment. Existing methods often generate plans in a once-for-all manner, i.e., one-step planning. Such approaches rely on large models without sufficient understanding of the environment. Considering multi-step planning, this paper proposes a framework for formulating plans sequentially. To ensure that our framework can tackle complex questions, we create a structured semantic space in which hierarchical visual perception and a chain expression of the question's essence interact iteratively. This space makes sequential task planning possible. Within the framework, we first parse natural language based on a visual hierarchical scene graph, which clarifies the intention of the question. Then, we incorporate external rules to make a plan for the current step, weakening the reliance on large models. Every plan is generated based on feedback from visual perception, with multiple rounds of interaction until an answer is obtained. This approach enables continuous feedback and adjustment, allowing the robot to optimize its action strategy. To test our framework, we contribute a new dataset with more complex questions. Experimental results demonstrate that our approach performs excellently and stably on complex tasks, and its feasibility in real-world scenarios has been established, indicating practical applicability.
Reconstructing and decomposing dynamic urban scenes is crucial for autonomous driving, urban planning, and scene editing. However, existing methods fail to perform instance-aware decomposition without manual annotations, which is essential for instance-level scene editing. We propose UnIRe, a 3D Gaussian Splatting (3DGS) based approach that decomposes a scene into a static background and individual dynamic instances using only RGB images and LiDAR point clouds. At its core, we introduce 4D superpoints, a novel representation that clusters multi-frame LiDAR points in 4D space, enabling unsupervised instance separation based on spatiotemporal correlations. These 4D superpoints serve as the foundation for our decomposed 4D initialization, i.e., providing spatial and temporal initialization to train a dynamic 3DGS for arbitrary dynamic classes without requiring bounding boxes or object templates. Furthermore, we introduce a smoothness regularization strategy in both 2D and 3D space, further improving temporal stability. Experiments on benchmark datasets show that our method outperforms existing methods in decomposed dynamic scene reconstruction while enabling accurate and flexible instance-level editing, making it a practical solution for real-world applications.
This paper presents, within an arable farming context, a predictive logic for switching on and off a set of nozzles attached to a boom aligned along a working width and carried by a machine, with the purpose of applying spray along the working width while the machine travels along a specific path-planning pattern. Concatenating multiple such path patterns, with corresponding concatenation of the proposed switching logic, enables nominally lossless spray application for area coverage tasks. The proposed predictive switching logic is compared to the common, state-of-the-art reactive switching logic for Boustrophedon-based path planning for area coverage. The trade-off between the reduction in path length and the increase in the number of required on/off switchings for the proposed method is discussed.
Collision avoidance capability is an essential component in an autonomous vessel navigation system. To this end, an accurate prediction of dynamic obstacle trajectories is vital. Traditional approaches to trajectory prediction face limitations in generalizability and often fail to account for the intentions of other vessels. While recent research has considered incorporating the intentions of dynamic obstacles, these efforts are typically based on the own-ship's interpretation of the situation. The current state-of-the-art in this area is a Dynamic Bayesian Network (DBN) model, which infers target vessel intentions by considering multiple underlying causes and allowing for different interpretations of the situation by different vessels. However, since its inception, there have not been any significant structural improvements to this model. In this paper, we propose enhancing the DBN model by incorporating considerations for grounding hazards and vessel waypoint information. The proposed model is validated using real vessel encounters extracted from historical Automatic Identification System (AIS) data.
Unmanned aerial vehicle (UAV) assisted communication is a revolutionary technology that has been recently presented as a potential candidate for beyond fifth-generation millimeter wave (mmWave) communications. Although mmWaves can offer a notably high data rate, their high penetration and propagation losses mean that line of sight (LoS) is necessary for effective communication. Due to the presence of obstacles and user mobility, UAV trajectory planning plays a crucial role in improving system performance. In this work, we propose a novel computational geometry-based trajectory planning scheme by considering the user mobility, the priority of the delay sensitive ultra-reliable low-latency communications (URLLC) and the high throughput requirements of the enhanced mobile broadband (eMBB) traffic. Specifically, we use geometric tools like Apollonius circle and minimum enclosing ball of balls to find the optimal position of the UAV that supports uninterrupted connections to the URLLC users and maximizes the aggregate throughput of the eMBB users. Finally, the numerical results demonstrate the benefits of the suggested approach over an existing state of the art benchmark scheme in terms of sum throughput obtained by URLLC and eMBB users.
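The Apollonius circle mentioned above is the locus of points whose distances to two fixed points have a constant ratio k; the standard closed form for its center and radius can be sketched as follows (illustrative geometry only, not the paper's full UAV placement algorithm):

```python
import math

def apollonius_circle(a, b, k):
    """Circle of points P with |P - a| / |P - b| = k (k > 0, k != 1).

    From |P-a|^2 = k^2 |P-b|^2 one obtains a circle with
    center (a - k^2 b) / (1 - k^2) and radius k |a - b| / |1 - k^2|.
    a, b are 2D points given as tuples; returns (center, radius).
    """
    ax, ay = a
    bx, by = b
    k2 = k * k
    cx = (ax - k2 * bx) / (1.0 - k2)
    cy = (ay - k2 * by) / (1.0 - k2)
    radius = k * math.dist(a, b) / abs(1.0 - k2)
    return (cx, cy), radius

# Points twice as close to the origin as to (4, 0): ratio k = 0.5.
center, radius = apollonius_circle((0.0, 0.0), (4.0, 0.0), 0.5)
```

In a UAV placement context, such a circle can bound the region where the distance ratio to two users (and hence their relative path loss) stays fixed, which is how geometric tools of this kind feed into positioning decisions.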
Clifford circuit optimization is an important step in the quantum compilation pipeline. Major compilers employ heuristic approaches; while these are fast, their results are often suboptimal. Minimization of noisy gates, like 2-qubit CNOT gates, is crucial for practical computing. Exact approaches have been proposed to fill the gap left by heuristic approaches. Among these are SAT-based approaches that optimize gate count or depth, but they suffer from scalability issues. Further, they do not guarantee optimality on more important metrics like CNOT count or CNOT depth. A recent work proposed an exhaustive search, restricted to Clifford circuits in a certain normal form, to guarantee CNOT-count optimality; but an exhaustive approach cannot scale beyond 6 qubits. In this paper, we incorporate the search restricted to Clifford normal forms into a SAT encoding to guarantee CNOT-count optimality. By allowing parallel plans, we propose a second SAT encoding that optimizes CNOT depth. Taking advantage of the flexibility of SAT-based approaches, we also handle connectivity restrictions of hardware platforms and allow for qubit relabeling. We have implemented the above encodings and variations in our open-source tool Q-Synth. In experiments, our encodings significantly outperform existing SAT approaches on random Clifford circuits. We consider practical VQE and Feynman benchmarks to compare with the TKET and Qiskit compilers. With all-to-all connectivity, we observe reductions of up to 32.1% in CNOT count and 48.1% in CNOT depth, and overall better results than TKET in both CNOT count and depth. We also experiment with the connectivity restrictions of major quantum platforms. Compared to Qiskit, we observe further reductions of up to 30.3% in CNOT count and 35.9% in CNOT depth.
Mobile robot navigation systems are increasingly relied upon in dynamic and complex environments, yet they often struggle with map inaccuracies and the resulting inefficient path planning. This paper presents MRHaD, a Mixed Reality-based Hand-drawn Map Editing Interface that enables intuitive, real-time map modifications through natural hand gestures. By integrating the MR head-mounted display with the robotic navigation system, operators can directly create hand-drawn restricted zones (HRZ), thereby bridging the gap between 2D map representations and the real-world environment. Comparative experiments against conventional 2D editing methods demonstrate that MRHaD significantly improves editing efficiency, map accuracy, and overall usability, contributing to safer and more efficient mobile robot operations. The proposed approach provides a robust technical foundation for advancing human-robot collaboration and establishing innovative interaction models that enhance the hybrid future of robotics and human society. For additional material, please check: https://mertcookimg.github.io/mrhad/
The Advanced Wakefield Experiment, AWAKE, is a well-established international collaboration and aims to develop the proton-driven plasma wakefield acceleration of electron bunches to energies and qualities suitable for first particle physics applications, such as strong-field QED and fixed target experiments ($\sim$50-200GeV). Numerical simulations show that these energies can be reached with an average accelerating gradient of $\sim1$GeV/m in a single proton-driven plasma wakefield stage. This is enabled by the high energy per particle and per bunch of the CERN SPS 19kJ, 400GeV and LHC ($\sim$120kJ, 7TeV) proton bunches. Bunches produced by synchrotrons are long, and AWAKE takes advantage of the self-modulation process to drive wakefields with GV/m amplitude. By the end of 2025, all physics concepts related to self-modulation will have been experimentally established as part of the AWAKE ongoing program that started in 2016. Key achievements include: direct observation of self-modulation, stabilization and control by two seeding methods, acceleration of externally injected electrons from 19MeV to more than 2GeV, and sustained high wakefield amplitudes beyond self-modulation saturation using a plasma density step. In addition to a brief summary of achievements reached so far, this document outlines the AWAKE roadmap as a demonstrator facility for producing beams with quality sufficient for first applications. The plan includes: 1) Accelerating a quality-controlled electron bunch to multi-GeV energies in a 10m plasma by 2031; 2) Demonstrating scalability to even higher energies by LS4. Synergies of the R&D performed in AWAKE that are relevant for advancing plasma wakefield acceleration in general are highlighted. We argue that AWAKE and similar advanced accelerator R&D be strongly supported by the European Strategy for Particle Physics Update.
Many robotics tasks, such as path planning or trajectory optimization, are formulated as optimal control problems (OCPs). The key to obtaining high performance lies in the design of the OCP's objective function. In practice, the objective function consists of a set of individual components that must be carefully modeled and traded off such that the OCP has the desired solution. Balancing multiple components to achieve the desired solution is often challenging, as is understanding, when the solution is undesirable, how individual cost components contribute to it. In this paper, we present a framework that addresses these challenges based on the concept of directional corrections. Specifically, given the solution to an OCP that is deemed undesirable, and access to an expert providing the direction of change that would increase the desirability of the solution, our method analyzes the individual cost components for their "consistency" with the provided directional correction. This information can be used to improve the OCP formulation, e.g., by increasing the weight of consistent cost components, or by reducing the weight of (or even redesigning) inconsistent cost components. We also show that our framework can automatically tune parameters of the OCP to achieve consistency with a set of corrections.
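The consistency idea above can be made concrete in a minimal toy sketch. Everything here is an assumption for illustration, not the paper's actual formulation: a 2D decision variable stands in for an OCP solution, two hypothetical cost components ("goal_distance", "obstacle_penalty") stand in for the objective, and a component is called consistent with an expert correction if its cost decreases along the correction direction (negative directional derivative).

```python
import numpy as np

# Hypothetical 2D decision variable standing in for an OCP solution.
x_sol = np.array([1.0, 0.5])
# Expert correction: "the solution should move left to be more desirable".
correction = np.array([-1.0, 0.0])

# Illustrative cost components J_i(x); names and forms are assumptions.
components = {
    "goal_distance": lambda x: np.sum((x - np.array([2.0, 0.5])) ** 2),
    "obstacle_penalty": lambda x: np.exp(-np.sum((x - np.array([2.5, 0.5])) ** 2)),
}

def grad(f, x, eps=1e-6):
    """Central finite-difference gradient of a scalar function."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

# A component is "consistent" with the correction if its cost decreases
# along the correction direction, i.e., grad(J_i) . correction < 0.
for name, J in components.items():
    ddir = grad(J, x_sol) @ correction
    label = "consistent" if ddir < 0 else "inconsistent"
    print(f"{name}: directional derivative {ddir:+.3f} -> {label}")
```

Here the obstacle sits to the right of the solution, so moving left reduces the obstacle penalty (consistent) while increasing the distance to the goal (inconsistent); the framework's suggestion would then be to down-weight or redesign the goal term.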
Compressed sensing with subsampled unitary matrices benefits from \emph{optimized} sampling schemes, which feature improved theoretical guarantees and empirical performance relative to uniform subsampling. We provide, for the first time in compressed sensing, theoretical guarantees showing that the error caused by measurement noise vanishes with an increasing number of measurements for optimized sampling schemes, assuming the noise is Gaussian. We moreover provide similar guarantees for measurements sampled with replacement using arbitrary probability weights. All our results hold on prior sets contained in a union of low-dimensional subspaces. Finally, we demonstrate that this denoising behavior appears in empirical experiments, with a rate that closely matches our theoretical guarantees, when the prior set is the range of a generative ReLU neural network and when it is the set of sparse vectors.
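The weighted with-replacement sampling setup can be illustrated numerically. This toy sketch makes several assumptions not taken from the paper: a random orthogonal matrix stands in for the unitary, the weights are leverage-style scores on a known support mixed with uniform weights, and an oracle least-squares decoder on that support plays the role of the simplest union-of-subspaces prior. The point of the sketch is the qualitative behavior claimed above: the reconstruction error caused by Gaussian noise shrinks as the number of measurements grows.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                              # ambient dimension
U = np.linalg.qr(rng.standard_normal((n, n)))[0]    # stand-in for the unitary

# Sparse signal with known support S: the simplest low-dimensional-subspace prior.
S = [3, 17, 42]
x = np.zeros(n)
x[S] = [1.0, -2.0, 0.5]

# Non-uniform sampling weights: leverage-style scores on S, mixed with a
# uniform floor so every row keeps positive probability (assumed, illustrative).
p = np.sum(U[:, S] ** 2, axis=1)
p = 0.5 * p / p.sum() + 0.5 / n

sigma = 0.5            # Gaussian noise level per measurement
errs = []
for m in (128, 512, 2048):
    idx = rng.choice(n, size=m, p=p)                # sample rows with replacement
    B = U[idx] / np.sqrt(p[idx, None])              # importance-reweight the rows
    y = B @ x + sigma * rng.standard_normal(m)      # noisy measurements
    x_hat = np.zeros(n)
    x_hat[S], *_ = np.linalg.lstsq(B[:, S], y, rcond=None)  # oracle decoder
    errs.append(np.linalg.norm(x_hat - x))
    print(f"m = {m:4d}, error = {errs[-1]:.4f}")
```

With the $1/\sqrt{p}$ reweighting, the expected Gram matrix satisfies $\mathbb{E}[B^\top B] = m\,I$, so the oracle least-squares error behaves like $\sigma\sqrt{s/m}$ and roughly halves each time $m$ quadruples, mirroring the vanishing-noise guarantee described above.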
Hot exozodiacal dust is dust in the innermost regions of planetary systems, at temperatures around 1000 K to 2000 K, and is commonly detected by near-infrared interferometry. The phenomenon is poorly understood and has received renewed attention as a potential risk to planned future space missions to image potentially habitable exoplanets and characterize their atmospheres (exo-Earth imaging), such as the Habitable Worlds Observatory (HWO). In this article, we review the current understanding of hot exozodiacal dust and its implications for HWO. We argue that the observational evidence suggests that the phenomenon is most likely real and indeed caused by hot dust, although conclusive proof, in particular of the latter statement, is still missing. Furthermore, we find that there exists as of yet no single model that successfully explains the presence of the dust. We find it plausible, and not unlikely, that large amounts of hot exozodiacal dust in a system will critically limit the sensitivity of exo-Earth imaging observations around that star. It is thus crucial to better understand the phenomenon in order to evaluate its actual impact on such a mission, and current and near-future observational opportunities for acquiring the required data exist. At the same time, hot exozodiacal dust (and warm exozodiacal dust closer to a system's habitable zone, HZ) has the potential to provide important context for HWO observations of rocky, HZ planets, constraining the environment in which these planets exist and hence helping to determine why a detected planet may or may not be capable of sustaining life.