We introduce MMBench-GUI, a hierarchical benchmark for evaluating GUI automation agents across Windows, macOS, Linux, iOS, Android, and Web platforms. It comprises four levels: GUI Content Understanding, Element Grounding, Task Automation, and Task Collaboration, covering the essential skills for GUI agents. In addition, we propose a novel Efficiency-Quality Area (EQA) metric to assess GUI agent execution efficiency in online automation scenarios. Through MMBench-GUI, we identify accurate visual grounding as a critical determinant of overall task success, emphasizing the substantial benefits of modular frameworks that integrate specialized grounding modules. Furthermore, reliable GUI automation requires strong task planning and cross-platform generalization, with long-context memory, a broad action space, and long-term reasoning playing essential roles. More importantly, task efficiency remains a severely underexplored dimension: all models suffer from substantial inefficiencies, taking excessive redundant steps even when tasks are ultimately completed. The integration of precise localization, effective planning, and early-stopping strategies is indispensable for truly efficient and scalable GUI automation. Our benchmark code, evaluation data, and running environment will be publicly available at https://github.com/open-compass/MMBench-GUI.
We present DINO-world, a powerful generalist video world model trained to predict future frames in the latent space of DINOv2. By leveraging a pre-trained image encoder and training a future predictor on a large-scale uncurated video dataset, DINO-world learns the temporal dynamics of diverse scenes, from driving and indoor scenes to simulated environments. We show that DINO-world outperforms previous models on a variety of video prediction benchmarks, e.g. segmentation and depth forecasting, and demonstrates strong understanding of intuitive physics. Furthermore, we show that it is possible to fine-tune the predictor on observation-action trajectories. The resulting action-conditioned world model can be used for planning by simulating candidate trajectories in latent space.
Budget planning and maintenance optimization are crucial for infrastructure asset management, ensuring cost-effectiveness and sustainability. However, the complexity arising from combinatorial action spaces, diverse asset deterioration, stringent budget constraints, and environmental uncertainty significantly limits existing methods' scalability. This paper proposes a Hierarchical Deep Reinforcement Learning methodology specifically tailored to multi-year infrastructure planning. Our approach decomposes the problem into two hierarchical levels: a high-level Budget Planner allocating annual budgets within explicit feasibility bounds, and a low-level Maintenance Planner prioritizing assets within the allocated budget. By structurally separating macro-budget decisions from asset-level prioritization and integrating linear programming projection within a hierarchical Soft Actor-Critic framework, the method efficiently addresses exponential growth in the action space and ensures rigorous budget compliance. A case study evaluating sewer networks of varying sizes (10, 15, and 20 sewersheds) illustrates the effectiveness of the proposed approach. Compared to conventional Deep Q-Learning and enhanced genetic algorithms, our methodology converges more rapidly, scales effectively, and consistently delivers near-optimal solutions even as network size grows.
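The linear programming projection step admits a concrete illustration. Below is a minimal sketch, assuming an L1-distance projection of the planner's raw budget proposal onto box and total-budget constraints; the paper's exact projection and variable names are not given in the abstract, so this is illustrative only:

```python
import numpy as np
from scipy.optimize import linprog

def project_budget(proposal, total_budget, lb, ub):
    """Project a raw budget proposal onto {b : lb <= b <= ub, sum(b) <= total_budget}
    by minimizing the L1 distance, posed as an LP over variables [b, t]
    with t_i >= |b_i - proposal_i|."""
    n = len(proposal)
    c = np.concatenate([np.zeros(n), np.ones(n)])             # minimize sum of t_i
    A1 = np.hstack([np.eye(n), -np.eye(n)])                   # b_i - t_i <= p_i
    A2 = np.hstack([-np.eye(n), -np.eye(n)])                  # -b_i - t_i <= -p_i
    A3 = np.concatenate([np.ones(n), np.zeros(n)])[None, :]   # sum(b) <= budget
    A_ub = np.vstack([A1, A2, A3])
    b_ub = np.concatenate([proposal, -proposal, [total_budget]])
    bounds = [(lb[i], ub[i]) for i in range(n)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n]

b = project_budget(np.array([5.0, 7.0, 3.0]), total_budget=12.0,
                   lb=np.zeros(3), ub=np.full(3, 6.0))
print(b, b.sum())  # a feasible allocation with sum <= 12
```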
Occupancy is crucial for autonomous driving, providing essential geometric priors for perception and planning. However, existing methods predominantly rely on LiDAR-based occupancy annotations, which limits scalability and prevents leveraging vast amounts of potential crowdsourced data for auto-labeling. To address this, we propose GS-Occ3D, a scalable vision-only framework that directly reconstructs occupancy. Vision-only occupancy reconstruction poses significant challenges due to sparse viewpoints, dynamic scene elements, severe occlusions, and long-horizon motion. Existing vision-based methods primarily rely on mesh representations, which suffer from incomplete geometry and require additional post-processing, limiting scalability. To overcome these issues, GS-Occ3D optimizes an explicit occupancy representation using an Octree-based Gaussian Surfel formulation, ensuring efficiency and scalability. Additionally, we decompose scenes into static background, ground, and dynamic objects, enabling tailored modeling strategies: (1) the ground is explicitly reconstructed as a dominant structural element, significantly improving large-area consistency; (2) dynamic vehicles are separately modeled to better capture motion-related occupancy patterns. Extensive experiments on the Waymo dataset demonstrate that GS-Occ3D achieves state-of-the-art geometry reconstruction results. By curating vision-only binary occupancy labels from diverse urban scenes, we show their effectiveness for downstream occupancy models on Occ3D-Waymo and superior zero-shot generalization on Occ3D-nuScenes. This highlights the potential of large-scale vision-based occupancy reconstruction as a new paradigm for autonomous driving perception. Project Page: https://gs-occ3d.github.io/
This study aims to establish an analytical model that reproduces the gravitational field around non-spherical bodies with constant density. Due to the non-spherical geometry of such bodies, their gravitational potential is disturbed relative to a central field. By considering the body as a polyhedron and decomposing it into tetrahedral elements, we use the Potential Series Expansion Method (PSEM) to approximate the total potential by summing the potentials of each tetrahedron. While this model does not offer higher accuracy than the classical polyhedral approach, it achieves relative errors below 0.1\% for points outside the body when developed to higher orders (e.g., orders 11 and 12), and significantly reduces execution time. To validate this approach, we apply our model to asteroids (87) Sylvia, (101955) Bennu, (99942) Apophis, and (25143) Itokawa. We determine equilibrium points, analyze stability, investigate zero-velocity planes, and calculate the relative errors between the gravitational field modeled by PSEM and the results obtained using both the classical polyhedral method of Tsoulis and Petrovic and the mass concentration method. Our results highlight the computational efficiency of PSEM in modeling the gravitational potential of irregularly shaped bodies. This efficiency stems from expressing the gravitational potential through a homogeneous analytical function that is easy to manipulate algebraically, enabling explicit determination of the acceleration vector. Our model provides a robust framework for more complex analyses, such as studying periodic orbits around non-spherical celestial bodies, assessing their stability, and planning smooth landing trajectories for spacecraft.
We propose an approach to trajectory optimization for piecewise polynomial systems based on the recently proposed graphs of convex sets framework. We instantiate the framework with a convex relaxation of optimal control based on occupation measures, resulting in a convex optimization problem resembling the discrete shortest-paths linear program that can be solved efficiently to global optimality. While this approach inherits the limitations of semidefinite programming, scalability to large numbers of discrete modes improves compared to the NP-hard mixed-integer formulation. We use this to plan trajectories under temporal logic specifications, comparing the computed cost lower bound to a nonconvex optimization approach with fixed mode sequence. In our numerical experiments, we find that this bound is typically in the vicinity of the nonconvex solution, while the runtime speedup is significant compared to the often intractable mixed-integer formulation. Our implementation is available at https://github.com/ebuehrle/hpoc.
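The discrete shortest-paths linear program that the relaxation resembles can be written down directly. A minimal sketch on a toy graph using scipy (illustrative; not the paper's occupation-measure formulation):

```python
import numpy as np
from scipy.optimize import linprog

# Edges of a small directed graph: (tail, head, cost).
edges = [(0, 1, 1.0), (0, 2, 4.0), (1, 2, 1.0), (1, 3, 5.0), (2, 3, 1.0)]
n_nodes, source, target = 4, 0, 3

# Flow conservation: one unit of flow leaves the source and enters the target.
A_eq = np.zeros((n_nodes, len(edges)))
for j, (u, v, _) in enumerate(edges):
    A_eq[u, j] += 1.0   # edge leaves u
    A_eq[v, j] -= 1.0   # edge enters v
b_eq = np.zeros(n_nodes)
b_eq[source], b_eq[target] = 1.0, -1.0

cost = np.array([c for _, _, c in edges])
res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(edges))
print(res.fun)  # 3.0: the path 0 -> 1 -> 2 -> 3 (the LP relaxation is tight here)
```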
Automatic classification of Diabetic Retinopathy (DR) can assist ophthalmologists in devising personalized treatment plans, making it a critical component of clinical practice. However, imbalanced data distribution in the dataset becomes a bottleneck in the generalization of deep learning models trained for DR classification. In this work, we combine a global attention block (GAB) and a category attention block (CAB) into the deep learning model, thus effectively overcoming the imbalanced data distribution problem in DR classification. Our proposed approach is based on an attention mechanism-based deep learning model that employs three pre-trained networks, namely MobileNetV3-small, EfficientNet-b0, and DenseNet-169, as the backbone architecture. We evaluate the proposed method on two publicly available datasets of retinal fundoscopy images for DR. Experimental results show that on the APTOS dataset, DenseNet-169 yielded 83.20% mean accuracy, followed by MobileNetV3-small and EfficientNet-b0, which yielded 82% and 80% accuracies, respectively. On the EYEPACS dataset, EfficientNet-b0 yielded a mean accuracy of 80%, while DenseNet-169 and MobileNetV3-small yielded 75.43% and 76.68% accuracies, respectively. In addition, we report an F1-score of 82.0%, precision of 82.1%, sensitivity of 83.0%, specificity of 95.5%, and a kappa score of 88.2% for the experiments. Moreover, our MobileNetV3-small model has 1.6 million parameters on the APTOS dataset and 0.90 million parameters on the EYEPACS dataset, considerably fewer than other methods. The proposed approach achieves competitive performance on par with recently reported works on DR classification.
Dynamic low-altitude networks offer significant potential for efficient and reliable data transport via unmanned aerial vehicle (UAV) relays, which usually operate with predetermined trajectories. However, it is challenging to optimize the data routing and resource allocation due to the time-varying topology and the need to control interference with terrestrial systems. Traditional schemes rely on time-expanded graphs with uniform and fine time subdivisions, making them impractical for interference-aware applications. This paper develops a dynamic space-time graph model with a cross-layer optimization framework that converts a joint routing and predictive resource allocation problem into a joint bottleneck path planning and resource allocation problem. We develop explicit deterministic bounds to handle the channel uncertainty and prove a monotonicity property in the problem structure that enables us to efficiently reach the globally optimal solution to the predictive resource allocation subproblem. Then, this approach is extended to multi-commodity transmission tasks through time-frequency allocation, and a bisection search algorithm is developed to find the optimal solution by leveraging the monotonicity of the feasible set family. Simulations verify that the single-commodity algorithm approaches global optimality, with a performance gain of more than 30 dB over classical graph-based methods for delay-sensitive, large-volume data transport. At the same time, the multi-commodity method achieves a 100X improvement in dense service scenarios and an additional 20 dB performance gain through data segmentation.
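The bisection search rests on exactly this monotonicity: if a performance target is achievable, every weaker target is too, so feasibility is a monotone predicate in the target. A generic sketch, with a hypothetical `feasible` oracle standing in for the joint path-planning and resource-allocation subproblem:

```python
def max_feasible_target(feasible, lo, hi, tol=1e-6):
    """Bisection over a scalar target, assuming monotone feasibility:
    feasible(t) and t' <= t imply feasible(t'). Returns (approximately)
    the hardest target that is still achievable."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid   # achievable: push the target higher
        else:
            hi = mid   # not achievable: back off
    return lo

# Toy stand-in feasibility test: the largest t with t**2 <= 2.
print(max_feasible_target(lambda t: t * t <= 2.0, 0.0, 10.0))  # ~1.41421
```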
In the context of urban traffic control, traffic signal optimisation is the problem of determining the optimal green length for each signal in a set of traffic signals. The literature has effectively tackled this problem, mostly with automated planning techniques leveraging the PDDL+ language and solvers. However, PDDL+ has limitations when it comes to specifying optimisation statements and computing optimal plans. In this paper, we provide an alternative solution to the traffic signal optimisation problem based on Constraint Answer Set Programming (CASP). We devise an encoding in a CASP language, which is then solved by means of clingcon 3, a system extending the well-known ASP solver clingo. We performed experiments on real historical data from the town of Huddersfield in the UK, comparing our approach to the PDDL+ model that obtained the best results for the considered benchmark. The results show the potential of our approach for tackling the traffic signal optimisation problem and improving the solution quality of the PDDL+ plans.
A search for a non-zero Electric Dipole Moment (EDM) of particles, which would be a clear signal of CP violation, is one of the unique ways to discover physics beyond the Standard Model. In this paper, we discuss a method for determining the EDM of baryons from the full angular distribution of final-state particles in electron-positron annihilation processes. The question of how accurately state-of-the-art experiments can determine the EDM of $\Lambda$ and $\Lambda_c^+$ baryons is discussed in detail. Using pseudo-data with statistics corresponding to the BESIII experiment, the estimated sensitivity for the $\Lambda$ EDM is at the level of $10^{-18}$ e$\cdot$cm. The corresponding figure for the proposed Super Tau-Charm Facility (STCF) experiment is found to be of the order of $10^{-20}$ e$\cdot$cm. For the $\Lambda_c^+$ EDM, the calculated sensitivity for the STCF experiment is $10^{-16}$ e$\cdot$cm. The case of a polarized initial electron is considered separately, as such an option is planned for the STCF experiment.
Medical imaging plays a critical role in modern healthcare, enabling clinicians to accurately diagnose diseases and develop effective treatment plans. However, noise, often introduced by imaging devices, can degrade image quality, leading to misinterpretation and compromised clinical outcomes. Existing denoising approaches typically rely either on noise characteristics or on contextual information from the image. Moreover, they are commonly developed and evaluated for a single imaging modality and noise type. Motivated by Geng et al.'s CNCL, which integrates both noise and context, this study introduces a Dual-Pathway Learning (DPL) model architecture that effectively denoises medical images by leveraging both sources of information and fusing them to generate the final output. DPL is evaluated across multiple imaging modalities and various types of noise, demonstrating its robustness and generalizability. DPL improves PSNR by 3.35% compared to the baseline UNet when evaluated on Gaussian noise and trained across all modalities. The code is available at 10.5281/zenodo.15836053.
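For reference, PSNR here is the standard peak signal-to-noise ratio; a minimal implementation of the reported metric (not the authors' code) is:

```python
import numpy as np

def psnr(reference, estimate, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images on [0, data_range]."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(estimate, dtype=np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)
```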
We present Lisp-Z3, an extension to the ACL2s systems programming framework (ASPF) that supports the use of the Z3 satisfiability modulo theories (SMT) solver. Lisp-Z3 allows one to develop tools written using the full feature set of Common Lisp that can use both ACL2/s (either ACL2 or ACL2s) and Z3 as services, combining the power of SMT and interactive theorem proving. Lisp-Z3 is usable by anyone who would like to interact with Z3 from Common Lisp, as it does not depend on the availability of ACL2/s. We discuss the use of Lisp-Z3 in three applications. The first is a Sudoku solver. The second is SeqSolve, a string solver which solved a larger number of benchmark problems more quickly than any other existing solver at the time of its publication. Finally, Lisp-Z3 was also used in the context of hardware-in-the-loop fuzzing of wireless routers, where low latency was an important goal. The latter two applications leveraged the ability of Lisp-Z3 to integrate Z3 with ACL2s code. We have further plans to use Lisp-Z3 inside of ACL2s to provide more powerful automated support for dependent types, and in particular more efficient generation of counterexamples to properties involving dependent types. This paper describes the usage and implementation of Lisp-Z3, as well as an evaluation of its use in the aforementioned applications.
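The abstract does not show Lisp-Z3's Common Lisp API, but the flavor of the Sudoku encoding can be conveyed with the analogous constraints in Z3's official Python bindings (one row shown):

```python
from z3 import Int, And, Distinct, Solver, sat

# One Sudoku row as SMT constraints: digits 1-9, all different, one clue.
cells = [Int(f"r0c{j}") for j in range(9)]
s = Solver()
s.add(*[And(1 <= c, c <= 9) for c in cells])  # domain constraints
s.add(Distinct(*cells))                       # all-different within the row
s.add(cells[0] == 5)                          # a given clue
if s.check() == sat:
    m = s.model()
    print([m[c].as_long() for c in cells])
```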
Self-organized polar textures can occur in ferroelectric materials across multiple length scales, from nanometer-scale vortices and skyrmions, to mesoscopic stripe domains, and macroscopic twin patterns, making these phenomena central to condensed matter physics and nanotechnology. Silicon-compatible ferroelectrics such as HfO2 and ZrO2 spontaneously form alternating stacks of two-dimensional (2D) polar and nonpolar half-unit-cell layers, effectively confining dipoles to isolated, single-atomic-plane layers. However, the arrangement of dipoles within each polar plane is generally considered uniform. Here, by utilizing scanning transmission electron microscopy (STEM) of an ultrathin ZrO2 film in the plan-view orientation, we show that within these irreducibly narrow polar layers, the dipole organization can be strikingly non-uniform, forming atomically thin, dimensionally confined, charged 180-degree domain walls, at most a few unit cells long, alternating between head-to-head and tail-to-tail configurations. Head-to-head and tail-to-tail walls each adopt completely distinctive interfacial structures and confine the in-plane domains to a sub-nm² footprint, making them among the smallest domains reported in any polar material. This work represents the first experimental observation of antipolar ferroic ordering via strongly charged domain walls nested within the self-organized polar-nonpolar layering, revealing a novel hierarchical self-organization of polar textures at the atomic scale and opening new pathways to atomically dense memories and domain-wall nanoelectronics in silicon-compatible, simple binary oxides.
Brachytherapy involves bringing a radioactive source near tumor tissue using implanted needles. Image-guided brachytherapy planning requires, among other steps, the reconstruction of the needles. Manually annotating these needles on patient images can be a challenging and time-consuming task for medical professionals. For automatic needle reconstruction, a two-stage pipeline is commonly adopted, comprising a segmentation stage followed by a post-processing stage. While deep learning models are effective for segmentation, their results often contain errors. No currently existing post-processing technique is robust to all possible segmentation errors. We therefore propose adaptations to existing post-processing techniques, mainly aimed at dealing with segmentation errors and thereby improving the reconstruction accuracy. Experiments on a prostate cancer dataset, based on MRI scans annotated by medical professionals, demonstrate that our proposed adaptations can help to effectively manage segmentation errors, with the best adapted post-processing technique achieving median needle-tip and needle-bottom point localization errors of $1.07$ (IQR $\pm 1.04$) mm and $0.43$ (IQR $\pm 0.46$) mm, respectively, and a median shaft error of $0.75$ (IQR $\pm 0.69$) mm, with zero false-positive and zero false-negative needles on a test set of 261 needles.
Traditional long-term microgrid planning models assume constant power charging for battery energy storage systems (BESS), overlooking efficiency losses that occur toward the end of charge due to rising internal resistance. While this issue can be mitigated at the cell level using constant current-constant voltage (CCCV) charging, it is impractical at the pack level in large-scale systems. However, battery management systems and inverter controls can emulate this effect by tapering charging power at high state-of-charge (SOC) levels, trading off charging speed for improved efficiency and reduced thermal stress. Ignoring this behavior in planning models can lead to undersized batteries and potential reliability issues. This paper proposes a tractable and scalable approach to approximate CCCV behavior using SOC-dependent tapered charging power (TCP) constraints. A MATLAB-based proof of concept demonstrates the energy delivery and efficiency benefits of tapering. The method is integrated into a long-term planning framework and evaluated under a synthetic load and solar profile. Results show tapering significantly affects BESS sizing, cost, and reliability under dynamic operating conditions that demand fast charging. These findings highlight tapering as a critical modeling factor for accurately capturing BESS performance in long-term microgrid planning.
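One simple reading of an SOC-dependent tapered charging power (TCP) constraint is a piecewise-linear cap on charging power; the taper knee and linear shape below are illustrative assumptions, not the paper's calibrated values:

```python
def tapered_charge_limit(soc, p_rated, soc_knee=0.8):
    """Charging power cap emulating CCCV at the pack level: full rated power
    below soc_knee, then a linear taper to zero at SOC = 1.0."""
    if soc <= soc_knee:
        return p_rated
    return p_rated * (1.0 - soc) / (1.0 - soc_knee)

for soc in (0.5, 0.8, 0.9, 1.0):
    print(soc, tapered_charge_limit(soc, p_rated=100.0))
# 100, 100, 50, 0 kW: power tapers in the constant-voltage-like region
```

In a planning model, this becomes a constraint of the form P_ch[t] <= f(SOC[t]), which stays linear if f is piecewise linear.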
Overtaking in high-speed autonomous racing demands precise, real-time estimation of collision risk, particularly in wheel-to-wheel scenarios where safety margins are minimal. Existing methods for collision risk estimation either rely on simplified geometric approximations, like bounding circles, or perform Monte Carlo sampling, which leads to overly conservative motion planning behavior at racing speeds. We introduce the Gauss-Legendre Rectangle (GLR) algorithm, a principled two-stage integration method that estimates collision risk by combining Gauss-Legendre quadrature with a non-homogeneous Poisson process over time. GLR produces accurate risk estimates that account for vehicle geometry and trajectory uncertainty. In experiments across 446 overtaking scenarios in a high-fidelity Formula One racing simulation, GLR outperforms five state-of-the-art baselines, achieving an average error reduction of 77% and surpassing the next-best method by 52%, all while running at 1000 Hz. The framework is general and applicable to broader motion planning contexts beyond autonomous racing.
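The two-stage structure can be sketched as follows: Gauss-Legendre quadrature integrates the instantaneous collision intensity of a non-homogeneous Poisson process over the maneuver horizon, and the process's survival function converts that integral into a collision probability. The intensity function itself, which encodes vehicle geometry and trajectory uncertainty, is the paper's contribution and is only stubbed here:

```python
import numpy as np

def collision_probability(intensity, horizon, n_nodes=8):
    """P(at least one collision in [0, horizon]) for a non-homogeneous
    Poisson process: 1 - exp(-integral of intensity), with the integral
    evaluated by Gauss-Legendre quadrature."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    t = 0.5 * horizon * (nodes + 1.0)             # map [-1, 1] -> [0, horizon]
    integral = 0.5 * horizon * np.sum(weights * intensity(t))
    return 1.0 - np.exp(-integral)

# Toy intensity peaking mid-maneuver (stand-in for the geometric model):
print(collision_probability(lambda t: 0.5 * np.exp(-(t - 1.0) ** 2), horizon=2.0))
```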
This study focuses on the development of a simulation-driven reinforcement learning (RL) framework for optimizing routing decisions in complex queueing network systems, with a particular emphasis on manufacturing and communication applications. Recognizing the limitations of traditional queueing methods, which often struggle with dynamic, uncertain environments, we propose a robust RL approach leveraging Deep Deterministic Policy Gradient (DDPG) combined with Dyna-style planning (Dyna-DDPG). The framework includes a flexible and configurable simulation environment capable of modeling diverse queueing scenarios, disruptions, and unpredictable conditions. Our enhanced Dyna-DDPG implementation incorporates separate predictive models for next-state transitions and rewards, significantly improving stability and sample efficiency. Comprehensive experiments and rigorous evaluations demonstrate the framework's capability to rapidly learn effective routing policies that maintain robust performance under disruptions and scale effectively to larger network sizes. Additionally, we highlight strong software engineering practices employed to ensure reproducibility and maintainability of the framework, enabling practical deployment in real-world scenarios.
Infrastructure asset management is essential for sustaining the performance of public infrastructure such as road networks, bridges, and utility networks. Traditional maintenance and rehabilitation planning methods often face scalability and computational challenges, particularly for large-scale networks with thousands of assets under budget constraints. This paper presents a novel deep reinforcement learning (DRL) framework that optimizes asset management strategies for large infrastructure networks. By decomposing the network-level Markov Decision Process (MDP) into individual asset-level MDPs while using a unified neural network architecture, the proposed framework reduces computational complexity, improves learning efficiency, and enhances scalability. The framework directly incorporates annual budget constraints through a budget allocation mechanism, ensuring maintenance plans are both optimal and cost-effective. Through a case study on a large-scale pavement network of 68,800 segments, the proposed DRL framework demonstrates significant improvements over traditional methods like Progressive Linear Programming and genetic algorithms, both in efficiency and network performance. This advancement contributes to infrastructure asset management and the broader application of reinforcement learning in complex, large-scale environments.
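As a toy picture of how asset-level decisions can respect a shared annual budget, consider a greedy value-per-cost allocation; the paper's actual budget allocation mechanism operates on learned values and may differ, so this is illustrative only:

```python
def allocate_budget(values, costs, budget):
    """Greedily fund the maintenance actions with the highest value per unit
    cost until the annual budget is exhausted."""
    order = sorted(range(len(costs)),
                   key=lambda i: values[i] / costs[i], reverse=True)
    selected, spent = [], 0.0
    for i in order:
        if spent + costs[i] <= budget:
            selected.append(i)
            spent += costs[i]
    return selected, spent

# Assets ranked by value density; only those fitting the budget are funded.
print(allocate_budget([9.0, 4.0, 6.0, 1.0], [3.0, 1.0, 4.0, 2.0], budget=6.0))
# -> ([1, 0, 3], 6.0)
```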
We present Captain Cinema, a framework for short movie generation. Given a detailed textual description of a movie storyline, our approach first generates a sequence of keyframes that outline the entire narrative, ensuring long-range coherence in both the storyline and visual appearance (e.g., scenes and characters). We refer to this step as top-down keyframe planning. These keyframes then serve as conditioning signals for a video synthesis model, which supports long-context learning, to produce the spatio-temporal dynamics between them. This step is referred to as bottom-up video synthesis. To support stable and efficient generation of multi-scene long narrative cinematic works, we introduce an interleaved training strategy for Multimodal Diffusion Transformers (MM-DiT), specifically adapted for long-context video data. Our model is trained on a specially curated cinematic dataset consisting of interleaved data pairs. Our experiments demonstrate that Captain Cinema performs favorably in the automated creation of visually coherent and narratively consistent short movies with high quality and efficiency. Project page: https://thecinema.ai
Reinforcement learning (RL) provides a principled framework for decision-making in partially observable environments, which can be modeled as Markov decision processes and compactly represented through dynamic decision Bayesian networks. Recent advances demonstrate that inference on sparse Bayesian networks can be accelerated using quantum rejection sampling combined with amplitude amplification, leading to a computational speedup in estimating acceptance probabilities. Building on this result, we introduce Quantum Bayesian Reinforcement Learning (QBRL), a hybrid quantum-classical look-ahead algorithm for model-based RL in partially observable environments. We present a rigorous, oracle-free time complexity analysis under fault-tolerant assumptions for the quantum device. Unlike standard treatments that assume a black-box oracle, we explicitly specify the inference process, allowing our bounds to more accurately reflect the true computational cost. We show that, for environments whose dynamics form a sparse Bayesian network, horizon-based near-optimal planning can be achieved sub-quadratically faster through quantum-enhanced belief updates. Furthermore, we present numerical experiments benchmarking QBRL against its classical counterpart on simple yet illustrative decision-making tasks. Our results offer a detailed analysis of how the quantum computational advantage translates into decision-making performance, highlighting that the magnitude of the advantage can vary significantly across different deployment settings.
Processing spatial data is a key component in many learning tasks for autonomous driving such as motion forecasting, multi-agent simulation, and planning. Prior works have demonstrated the value in using SE(2) invariant network architectures that consider only the relative poses between objects (e.g. other agents, scene features such as traffic lanes). However, these methods compute the relative poses for all pairs of objects explicitly, requiring quadratic memory. In this work, we propose a mechanism for SE(2) invariant scaled dot-product attention that requires linear memory relative to the number of objects in the scene. Our SE(2) invariant transformer architecture enjoys the same scaling properties that have benefited large language models in recent years. We demonstrate experimentally that our approach is practical to implement and improves performance compared to comparable non-invariant architectures.
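The quadratic-memory baseline referred to above materializes every pairwise relative pose explicitly. A minimal numpy sketch makes the O(N²) memory cost visible; avoiding this (N, N, 3) tensor is precisely what the proposed attention mechanism achieves:

```python
import numpy as np

def pairwise_relative_poses(poses):
    """Pose of every object j expressed in the local frame of every object i.
    Input: (N, 3) array of (x, y, theta). Output: (N, N, 3), quadratic in N."""
    xy, theta = poses[:, :2], poses[:, 2]
    c, s = np.cos(theta), np.sin(theta)
    d = xy[None, :, :] - xy[:, None, :]   # world-frame offsets, shape (N, N, 2)
    rel_x = c[:, None] * d[..., 0] + s[:, None] * d[..., 1]  # rotate into frame i
    rel_y = -s[:, None] * d[..., 0] + c[:, None] * d[..., 1]
    rel_theta = theta[None, :] - theta[:, None]
    return np.stack([rel_x, rel_y, rel_theta], axis=-1)

poses = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, np.pi / 2]])
print(pairwise_relative_poses(poses).shape)  # (2, 2, 3)
```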
In recent years, Compressed Sensing (CS) has gained significant interest as a technique for acquiring high-resolution sensory data using fewer measurements than traditional Nyquist sampling requires. At the same time, autonomous robotic platforms such as drones and rovers have become increasingly popular tools for remote sensing and environmental monitoring tasks, including measurements of temperature, humidity, and air quality. Within this context, this paper presents, to the best of our knowledge, the first investigation into how the structure of CS measurement matrices can be exploited to design optimized sampling trajectories for robotic environmental data collection. We propose a novel Monte Carlo optimization framework that generates measurement matrices designed to minimize both the robot's traversal path length and the signal reconstruction error within the CS framework. Central to our approach is the application of Dictionary Learning (DL) to obtain a data-driven sparsifying transform, which enhances reconstruction accuracy while further reducing the number of samples that the robot needs to collect. We demonstrate the effectiveness of our method through experiments reconstructing $NO_2$ pollution maps over the Gulf region. The results indicate that our approach can reduce robot travel distance to less than $10\%$ of a full-coverage path, while improving reconstruction accuracy by over a factor of five compared to traditional CS methods based on DCT and polynomial dictionaries, as well as by a factor of two compared to previously proposed Informative Path Planning (IPP) methods.
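The reconstruction pipeline, a signal sparse in a learned dictionary, measured by a matrix whose rows the robot realizes as samples along its path, can be sketched end to end. The random orthonormal dictionary and Gaussian measurement matrix below are stand-ins for the learned dictionary and the trajectory-optimized matrix from the Monte Carlo framework:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 128, 32, 4                    # signal length, measurements, sparsity

D, _ = np.linalg.qr(rng.normal(size=(n, n)))   # stand-in dictionary
x = D[:, :k] @ rng.normal(size=k)              # signal that is k-sparse in D

Phi = rng.normal(size=(m, n)) / np.sqrt(m)     # stand-in measurement matrix
y = Phi @ x                                    # compressed measurements

# Sparse recovery: y = (Phi D) c with c sparse, then x_hat = D c.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi @ D, y)
x_hat = D @ omp.coef_
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))  # near-zero relative error
```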
This study proposes and evaluates the PAnoramic Learning Map (PALM), a learning analytics (LA) dashboard designed to address the scalability challenges of LA by integrating curriculum-level information. Traditional LA research has predominantly focused on individual courses or learners and often lacks a framework that considers the relationships between courses and the long-term trajectory of learning. To bridge this gap, PALM was developed to integrate multilayered educational data into a curriculum map, enabling learners to intuitively understand their learning records and academic progression. We conducted a system evaluation to assess PALM's effectiveness in two key areas: (1) its impact on students' awareness of their learning behaviors, and (2) its comparative performance against existing systems. The results indicate that PALM enhances learners' awareness of study planning and reflection, particularly by improving perceived behavioral control through the visual presentation of individual learning histories and statistical trends, which clarify the links between learning actions and outcomes. Although PALM requires ongoing refinement as a system, it received significantly higher evaluations than existing systems in terms of visual appeal and usability. By serving as an information resource with previously inaccessible insights, PALM enhances self-regulated learning and engagement, representing a significant step beyond conventional LA toward a comprehensive and scalable approach.
Tritium is a well-known byproduct of particle accelerator operations. To keep levels of tritium below regulatory limits, tritium production is actively monitored and managed at Fermilab. We plan to study tritium production in the targets, beamline components, and shielding elements of Fermilab facilities such as NuMI, BNB, and MI-65. To facilitate the analysis, we construct a simple model and use three Monte Carlo radiation codes, FLUKA, MARS, and PHITS, to estimate the amount of tritium produced in these facilities. The analysis can also serve as an intercomparison of these codes' predictions of tritium production. To assess the actual amounts of tritium that would be released from various materials, we employ a semi-empirical diffusion model. The results of this analysis are compared to experimental data whenever possible. This approach also helps to optimize proposed target materials with respect to tritium production and release.
Semantics-driven 3D spatial constraints align high-level semantic representations with low-level action spaces, facilitating the unification of task understanding and execution in robotic manipulation. The synergistic reasoning of Multimodal Large Language Models (MLLMs) and Vision Foundation Models (VFMs) enables cross-modal 3D spatial constraint construction. Nevertheless, existing methods have three key limitations: (1) coarse semantic granularity in constraint modeling, (2) lack of real-time closed-loop planning, and (3) compromised robustness in semantically diverse environments. To address these challenges, we propose ReSem3D, a unified manipulation framework for semantically diverse environments that leverages the synergy between VFMs and MLLMs to achieve fine-grained visual grounding and dynamically construct hierarchical 3D spatial constraints for real-time manipulation. Specifically, the framework is driven by hierarchical recursive reasoning in MLLMs, which interact with VFMs to automatically construct 3D spatial constraints from natural language instructions and RGB-D observations in two stages: part-level extraction and region-level refinement. Subsequently, these constraints are encoded as real-time optimization objectives in joint space, enabling reactive behavior under dynamic disturbances. Extensive simulation and real-world experiments are conducted in semantically rich household and sparse chemical lab environments. The results demonstrate that ReSem3D performs diverse manipulation tasks under zero-shot conditions, exhibiting strong adaptability and generalization. Code and videos are available at https://github.com/scy-v/ReSem3D and https://resem3d.github.io.
In this paper, we present a subsystem, using Unmanned Aerial Vehicles (UAVs), for search and rescue missions, focusing on people detection, face recognition, and tracking of identified individuals. The proposed solution integrates a UAV with the ROS2 framework and utilizes multiple convolutional neural networks (CNNs) for search missions. System identification and PD controller deployment are performed for autonomous UAV navigation. The ROS2 environment utilizes the YOLOv11 and YOLOv11-pose CNNs for tracking purposes, and the dlib library CNN for face recognition. The system detects a specific individual, performs face recognition, and starts tracking. If the individual is not yet known, the UAV operator can manually locate the person, save their facial image, and immediately initiate the tracking process. The tracking process relies on specific keypoints identified on the human body using the YOLOv11-pose CNN model. These keypoints are used to track a specific individual and maintain a safe distance. To enhance tracking accuracy, system identification is performed based on measurement data from the UAV's IMU. The identified system parameters are used to design PD controllers that utilize YOLOv11-pose to estimate the distance between the UAV's camera and the identified individual. The initial experiments, conducted on 14 known individuals, demonstrated that the proposed subsystem can be successfully used in real time. The next step involves implementing the system on a large experimental UAV for field use and integrating autonomous navigation with GPS-guided control for rescue operations planning.
Power systems have witnessed large-scale renewable penetration, resulting in reduced levels of system inertia and increasing requirements for frequency response services. Numerous studies have developed frequency-constrained models for power system security. However, most existing literature considers only uniform frequency security, neglecting spatial frequency differences across regions. To fill this gap, this paper proposes a novel planning model for the optimal sizing problem of power systems, capturing regional frequency security and inter-area frequency oscillations. Specifically, regional frequency constraints are first extracted via an enhanced input convex neural network (ICNN) and then embedded into the original optimisation for frequency security, where a principled weight initialisation strategy is adopted to deal with the gradient vanishing issues of non-negative weights in traditional ICNNs and enhance the network's fitting ability. An adaptive genetic algorithm with sparsity calculation and local search is developed to separate the planning model into two stages and effectively solve it iteratively. Case studies have been conducted on three different power systems to verify the effectiveness of the proposed frequency-constrained planning model in ensuring regional system security and obtaining realistic investment decisions.
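For readers unfamiliar with ICNNs: input convexity comes from constraining the hidden-to-hidden weights to be non-negative while using convex, non-decreasing activations, which is also why naive non-negativity can stall gradients when many weights sit at zero. A minimal forward pass, with illustrative shapes rather than the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def icnn_forward(x, Wx_list, Wz_list, b_list):
    """Input convex neural network: the output is convex in x because the Wz
    weights are non-negative and ReLU is convex and non-decreasing."""
    z = np.maximum(Wx_list[0] @ x + b_list[0], 0.0)
    for Wx, Wz, b in zip(Wx_list[1:], Wz_list, b_list[1:]):
        z = np.maximum(Wz @ z + Wx @ x + b, 0.0)  # Wz >= 0 elementwise
    return z

Wx_list = [rng.normal(size=(8, 2)), rng.normal(size=(8, 2)), rng.normal(size=(1, 2))]
Wz_list = [np.abs(rng.normal(size=(8, 8))), np.abs(rng.normal(size=(1, 8)))]
b_list = [rng.normal(size=8), rng.normal(size=8), rng.normal(size=1)]
print(icnn_forward(np.array([0.3, -0.7]), Wx_list, Wz_list, b_list))
```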
Land Use Land Cover (LULC) mapping is essential for urban and resource planning, and is one of the key elements in developing smart and sustainable cities. This study evaluates advanced LULC mapping techniques, focusing on Look-Up Table (LUT)-based Atmospheric Correction applied to Cartosat Multispectral (MX) sensor images, followed by supervised and semi-supervised learning models for LULC prediction. We explore DeeplabV3+ and Cross-Pseudo Supervision (CPS). The CPS model is further refined with dynamic weighting, enhancing pseudo-label reliability during training. This comprehensive approach analyses the accuracy and utility of LULC mapping techniques for various urban planning applications. A case study of Hyderabad, India, illustrates significant land use changes due to rapid urbanization. By analyzing Cartosat MX images over time, we highlight shifts such as urban sprawl, shrinking green spaces, and expanding industrial areas. This demonstrates the practical utility of these techniques for urban planners and policymakers.
Power system decarbonization is at the focal point of the clean energy transition. While system operators and utility companies increasingly publicize system-level carbon emission information, it remains unclear how emissions from individual generators are transported through the grid and how they impact electricity users at specific locations. This paper presents a novel and computationally efficient approach for exact quantification of nodal average and marginal carbon emission rates, applicable to both AC and DC optimal power flow problems. The approach leverages graph-based topological sorting and directed cycle removal techniques, applied to directed graphs formed by generation dispatch and optimal power flow solutions. Our proposed algorithm efficiently identifies each generator's contribution to each node, capturing how emissions are spatially distributed under varying system conditions. To validate its effectiveness and reveal locational and temporal emission patterns in the real world, we simulate the 8,870-bus realistic California grid using actual CAISO data and the CATS model. Based on year-long hourly data on nodal loads and renewable generation, obtained or estimated from CAISO public data, our method accurately estimates power flow conditions, generation mixes, and system-wide emissions, and delivers fine-grained spatiotemporal emission analysis for every California county. Both our algorithm and the California study are open-sourced, providing a foundation for future research on grid emissions, planning, operations, and energy policy.
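The graph-based idea can be sketched on a toy acyclic flow network: after cycle removal, process nodes in topological order and mix upstream average emission rates with local generation. Numbers below are illustrative, not from the CAISO study:

```python
import networkx as nx

# Directed power-flow graph: edge weights are MW flows; `gen` is local
# generation (MW) and `rate` its emission rate (tCO2/MWh).
G = nx.DiGraph()
G.add_weighted_edges_from([("A", "B", 60.0), ("A", "C", 40.0), ("B", "C", 20.0)])
gen = {"A": 100.0, "B": 0.0, "C": 0.0}
rate = {"A": 0.4, "B": 0.0, "C": 0.0}

# Nodal average rate = total inflowing emissions / total inflowing power.
avg = {}
for node in nx.topological_sort(G):
    power = gen[node]
    emissions = gen[node] * rate[node]
    for u, _, f in G.in_edges(node, data="weight"):
        power += f
        emissions += f * avg[u]
    avg[node] = emissions / power if power > 0 else 0.0
print(avg)  # every node inherits A's 0.4 tCO2/MWh in this single-source toy
```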
This research full paper investigates the factors influencing computing educators' adoption of project-based learning (PjBL) in software engineering and computing curricula. Recognized as a student-centered pedagogical approach, PjBL has the potential to enhance student motivation, engagement, critical thinking, collaboration, and problem-solving skills. Despite these benefits, faculty adoption remains inconsistent due to challenges such as insufficient institutional support, time constraints, limited training opportunities, designing or sourcing projects, and aligning them with course objectives. This research explores these barriers and investigates the strategies and resources that facilitate a successful adoption. Using a mixed-methods approach, data from 80 computing faculty were collected through an online survey comprising closed-ended questions to quantify barriers, enablers, and resource needs, along with an open-ended question to gather qualitative insights. Quantitative data were analyzed using statistical methods, while qualitative responses underwent thematic analysis. Results reveal that while PjBL is widely valued, its adoption is often selective and impacted by challenges in planning and managing the learning process, designing suitable projects, and a lack of institutional support, such as time, funding, and teaching assistants. Faculty are more likely to adopt or sustain PjBL when they have access to peer collaboration, professional development, and institutional incentives. In addition, sourcing projects from research, industry partnerships, and borrowing from peers emerged as key facilitators for new projects. These findings underscore the need for systemic support structures to empower faculty to experiment with and scale PjBL practices.