Many real-world scenarios require multiple agents to coordinate in shared environments while balancing trade-offs between multiple, potentially competing objectives. Current multi-objective multi-agent path finding (MO-MAPF) algorithms typically produce conflict-free plans by computing Pareto frontiers; they do not explicitly optimize for user-defined preferences, even when such preferences are available, and they scale poorly with the number of objectives. We propose a lexicographic framework for modeling MO-MAPF, along with an algorithm, \textit{Lexicographic Conflict-Based Search} (LCBS), that directly computes a single solution aligned with a lexicographic preference over objectives. LCBS integrates a priority-aware low-level $A^*$ search with conflict-based search, avoiding Pareto frontier construction and enabling efficient planning guided by the preference over objectives. We provide insights into optimality and scalability, and empirically demonstrate that LCBS computes optimal solutions while scaling to instances with up to ten objectives -- far beyond the limits of existing MO-MAPF methods. Evaluations on standard and randomized MAPF benchmarks show consistently higher success rates than state-of-the-art baselines, especially as the number of objectives grows.
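The core ordering idea can be made concrete with a minimal sketch (hypothetical names, not the authors' implementation): search nodes carrying per-objective cost vectors are prioritized by comparing the costs in lexicographic preference order.

```python
# Illustrative sketch: ordering search nodes by a lexicographic
# preference over objective cost vectors (names are hypothetical).
from heapq import heappush, heappop

def lex_key(costs, priority):
    """Reorder a cost vector so plain tuple comparison realizes the preference.

    costs    -- per-objective path costs, e.g. (makespan, energy, risk)
    priority -- objective indices from most to least important
    """
    return tuple(costs[i] for i in priority)

# Prefer objective 2 first, then objective 0, then objective 1.
priority = (2, 0, 1)
open_list = []
for node, costs in [("a", (3, 1, 5)), ("b", (9, 9, 4)), ("c", (1, 2, 4))]:
    heappush(open_list, (lex_key(costs, priority), node))

# "c" and "b" tie on objective 2 (both 4); "c" wins on objective 0 (1 < 9).
order = [heappop(open_list)[1] for _ in range(3)]
```

Because the key is just a reordered tuple, Python's built-in comparison gives the lexicographic order for free, so no Pareto frontier needs to be maintained.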
Offline goal-conditioned reinforcement learning (GCRL) trains policies that reach user-specified goals at test time, providing a simple, unsupervised, domain-agnostic way to extract diverse behaviors from unlabeled, reward-free datasets. Nonetheless, long-horizon decision making remains difficult for GCRL agents due to temporal credit assignment and error accumulation, and the offline setting amplifies these effects. To alleviate this issue, we introduce Test-Time Graph Search (TTGS), a lightweight planning approach to solve the GCRL task. TTGS accepts any state-space distance or cost signal, builds a weighted graph over dataset states, and performs fast search to assemble a sequence of subgoals that a frozen policy executes. When the base learner is value-based, the distance is derived directly from the learned goal-conditioned value function, so no handcrafted metric is needed. TTGS requires no changes to training, no additional supervision, no online interaction, and no privileged information, and it runs entirely at inference. On the OGBench benchmark, TTGS improves success rates of multiple base learners on challenging locomotion tasks, demonstrating the benefit of simple metric-guided test-time planning for offline GCRL.
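The graph-search step can be sketched as follows (an assumed interface, not the paper's code): dataset states become graph nodes, edges are weighted by a distance signal, and a shortest-path search produces the subgoal chain for the frozen policy.

```python
# Illustrative sketch of test-time graph search over dataset states:
# connect states under a distance signal, then run Dijkstra to get subgoals.
import heapq

def subgoal_chain(states, dist, start, goal, thresh=2.5):
    # Connect each state to neighbors within `thresh` under the metric.
    n = len(states)
    adj = {i: [(j, dist(states[i], states[j]))
               for j in range(n) if j != i and dist(states[i], states[j]) <= thresh]
           for i in range(n)}
    # Dijkstra from start to goal.
    best, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > best.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < best.get(v, float("inf")):
                best[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Walk back to recover the subgoal sequence.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# 1-D toy states; Euclidean distance stands in for the learned
# value-derived metric the paper uses for value-based learners.
states = [0.0, 1.0, 2.0, 3.0, 4.0]
path = subgoal_chain(states, lambda a, b: abs(a - b), start=0, goal=4)
```

The frozen policy would then be conditioned on each subgoal in `path` in turn, which is why the method needs no retraining.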
We present a novel hybrid learning-assisted planning method, named HyPlan, for solving the collision-free navigation problem for self-driving cars in partially observable traffic environments. HyPlan combines multi-agent behavior prediction, deep reinforcement learning with proximal policy optimization, and approximated online POMDP planning with heuristic confidence-based vertical pruning to reduce its execution time without compromising driving safety. Our experimental performance analysis on the CARLA-CTS2 benchmark of critical traffic scenarios with pedestrians revealed that HyPlan can navigate more safely than selected relevant baselines and performs significantly faster than the considered alternative online POMDP planners.
The spatial-temporal imbalance between supply and demand in shared micro-mobility services often leads to observed demand being censored, resulting in incomplete records of the underlying real demand. This phenomenon undermines the reliability of the collected demand data and hampers downstream applications such as demand forecasting, fleet management, and micro-mobility planning. How to accurately estimate the real demand is challenging and has not been well explored in existing studies. In view of this, we contribute to real demand estimation for shared micro-mobility services by proposing an analytical method that rigorously derives the real demand under appropriate assumptions. Rather than directly modeling the intractable relationship between observed demand and real demand, we propose a novel random variable, Generalized Vehicle Survival Time (GVST), which is observable from trip records. The relationship between GVST and real demand is characterized by introducing a flipped queueing model (FQM) that captures the operational dynamics of shared micro-mobility services. Specifically, the distribution of GVST is derived within the FQM, which allows the real demand estimation problem to be transformed into an inverse queueing problem. We analytically derive the real demand in closed form using a one-sided estimation method, and solve the problem by a system of equations in a two-sided estimation method. We validate the proposed methods using synthetic data and conduct empirical analyses using real-world datasets from bike-sharing and shared e-scooter systems. The experimental results show that both the two-sided and one-sided methods outperform benchmark models. In particular, the one-sided approach provides a closed-form solution that delivers acceptable accuracy, constituting a practical rule of thumb for demand-related analytics and decision-making processes.
Multi-fingered hands are emerging as powerful platforms for performing fine manipulation tasks, including tool use. However, environmental perturbations or execution errors can impede task performance, motivating the use of recovery behaviors that enable normal task execution to resume. In this work, we take advantage of recent advances in diffusion models to construct a framework that autonomously identifies when recovery is necessary and optimizes contact-rich trajectories to recover. We use a diffusion model trained on the task to estimate when states are not conducive to task execution, framed as an out-of-distribution detection problem. We then use diffusion sampling to project these states in-distribution and use trajectory optimization to plan contact-rich recovery trajectories. We also propose a novel diffusion-based approach that distills this process to efficiently diffuse the full parameterization, including constraints, goal state, and initialization, of the recovery trajectory optimization problem, saving time during online execution. We compare our method to a reinforcement learning baseline and other methods that do not explicitly plan contact interactions, including on a hardware screwdriver-turning task where we show that recovering using our method improves task performance by 96% and that ours is the only method evaluated that can attempt recovery without causing catastrophic task failure. Videos can be found at https://dtourrecovery.github.io/.
Periodic 3D reconstruction is essential for crop monitoring, but costly when each cycle restarts from scratch, wasting resources and ignoring information from previous captures. We propose temporal-prior-guided view planning for periodic plant reconstruction, in which a previously reconstructed model of the same plant is non-rigidly aligned to a new partial observation to form an approximation of the current geometry. To accommodate plant growth, we inflate this approximation and solve a set covering optimization problem to compute a minimal set of views. We integrated this method into a complete pipeline that acquires one additional next-best view before registration for robustness and then plans a globally shortest path to connect the planned set of views and outputs the best view sequence. Experiments on maize and tomato under hemisphere and sphere view spaces show that our system maintains or improves surface coverage while requiring fewer views and comparable movement cost compared to state-of-the-art baselines.
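The set-covering step admits a standard greedy sketch (names and data here are illustrative, not the paper's API): each candidate view covers a set of surface patches, and views are chosen until all patches are covered.

```python
# Toy sketch of the set-covering view selection: greedily pick the view
# that covers the most still-uncovered surface patches.
def greedy_view_cover(coverage, patches):
    """coverage: view -> set of patch ids that view observes."""
    uncovered = set(patches)
    chosen = []
    while uncovered:
        view = max(coverage, key=lambda v: len(coverage[v] & uncovered))
        gained = coverage[view] & uncovered
        if not gained:          # remaining patches are unobservable
            break
        chosen.append(view)
        uncovered -= gained
    return chosen, uncovered

coverage = {
    "v1": {1, 2, 3},
    "v2": {3, 4},
    "v3": {4, 5, 6},
    "v4": {2, 5},
}
views, missed = greedy_view_cover(coverage, patches=range(1, 7))
```

Greedy set cover carries the classical logarithmic approximation guarantee, which is typically sufficient for keeping the planned view set small; the globally shortest path over the chosen views would be computed in a separate step.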
This study develops and analyzes an optimization model of smart contract adoption under bounded risk, linking structural theory with simulation and real-world validation. We examine how adoption intensity $\alpha$ is structurally pinned at a boundary solution, invariant to variance and heterogeneity, while profitability and service outcomes are variance-fragile, eroding under volatility and heavy-tailed demand. A sharp threshold in the fixed cost parameter $A_3$ triggers discontinuous adoption collapse (H1), variance shocks reduce profits monotonically but not adoption (H2), and additional results on readiness heterogeneity (H3), profit-service co-benefits (H4), and distributional robustness (H5) confirm the duality between stable adoption and fragile payoffs. External validity checks further establish convergence of sample average approximation at the canonical $O(1/\sqrt{N})$ rate (H6). Empirical validation using S&P 500 returns and the MovieLens100K dataset corroborates the theoretical structure: bounded and heavy-tailed distributions fit better than Gaussian models, and profits diverge across volatility regimes even as adoption remains stable. Taken together, the results demonstrate that adoption choices are robust to uncertainty, but their financial consequences are highly fragile. For operations and finance, this duality underscores the need for risk-adjusted performance evaluation, option-theoretic modeling, and distributional stress testing in strategic investment and supply chain design.
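The $O(1/\sqrt{N})$ sample-average-approximation rate cited as H6 is the standard Monte Carlo rate and can be illustrated directly (a generic demonstration, not the paper's model): the mean error of an $N$-sample average shrinks by a factor of about 4 when $N$ grows 16-fold.

```python
# Minimal illustration of the O(1/sqrt(N)) SAA convergence rate:
# estimate the mean of U(0,1) and track the average absolute error.
import random

def saa_error(n, trials=2000, seed=0):
    rng = random.Random(seed)
    errs = []
    for _ in range(trials):
        xs = [rng.uniform(0.0, 1.0) for _ in range(n)]
        errs.append(abs(sum(xs) / n - 0.5))   # true mean of U(0,1) is 0.5
    return sum(errs) / trials

e_small, e_big = saa_error(25), saa_error(400)
ratio = e_small / e_big   # N grows 16x, so the error should shrink ~4x
```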
Compliance with maritime traffic rules is essential for the safe operation of autonomous vessels, yet training reinforcement learning (RL) agents to adhere to them is challenging. The behavior of RL agents is shaped by the training scenarios they encounter, but creating scenarios that capture the complexity of maritime navigation is non-trivial, and real-world data alone is insufficient. To address this, we propose a falsification-driven RL approach that generates adversarial training scenarios in which the vessel under test violates maritime traffic rules, which are expressed as signal temporal logic specifications. Our experiments on open-sea navigation with two vessels demonstrate that the proposed approach provides more relevant training scenarios and achieves more consistent rule compliance.
Realistic traffic simulation is critical for the development of autonomous driving systems and urban mobility planning, yet existing imitation learning approaches often fail to model realistic traffic behaviors. Behavior cloning suffers from covariate shift, while Generative Adversarial Imitation Learning (GAIL) is notoriously unstable in multi-agent settings. We identify a key source of this instability: irrelevant interaction misguidance, where a discriminator penalizes an ego vehicle's realistic behavior due to unrealistic interactions among its neighbors. To address this, we propose Decomposed Multi-agent GAIL (DecompGAIL), which explicitly decomposes realism into ego-map and ego-neighbor components, filtering out misleading neighbor-neighbor and neighbor-map interactions. We further introduce a social PPO objective that augments ego rewards with distance-weighted neighborhood rewards, encouraging overall realism across agents. Integrated into a lightweight SMART-based backbone, DecompGAIL achieves state-of-the-art performance on the WOMD Sim Agents 2025 benchmark.
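A distance-weighted neighborhood reward in the spirit of the social PPO objective can be sketched as follows (the exponential weighting and the parameter names are our assumptions, not the paper's exact formulation):

```python
# Hedged sketch of a distance-weighted social reward: nearby neighbors'
# realism rewards count more toward the ego agent's objective.
import math

def social_reward(ego_reward, neighbors, tau=10.0, beta=0.5):
    """neighbors: list of (distance_to_ego, neighbor_reward) pairs.

    tau  -- assumed distance scale of the weighting (hypothetical)
    beta -- assumed weight of the social term (hypothetical)
    """
    weighted = sum(math.exp(-d / tau) * r for d, r in neighbors)
    return ego_reward + beta * weighted

# A neighbor 5 m away contributes far more than one 50 m away.
r = social_reward(1.0, [(5.0, 0.8), (50.0, 0.8)])
```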
Effective path planning is fundamental to the coordination of unmanned aerial vehicle (UAV) and unmanned ground vehicle (UGV) systems, particularly in applications such as surveillance, navigation, and emergency response. Combining UAVs' broad field of view with UGVs' ground-level operational capability greatly improves the likelihood of successfully achieving task objectives such as locating victims, monitoring target areas, or navigating hazardous terrain. In complex environments, UAVs need to provide precise environmental perception information for UGVs to optimize their routing policy. However, due to severe interference and non-line-of-sight conditions, wireless communication is often unstable in such complex environments, making it difficult to support timely and accurate path planning for UAV-UGV coordination. To this end, this paper proposes a semantic communication (SemCom) framework to enhance UAV-UGV cooperative path planning under unreliable wireless conditions. Unlike traditional methods that transmit raw data, SemCom transmits only the key information needed for path planning, reducing transmission volume without sacrificing accuracy. The proposed framework is developed by defining key semantics for path planning and designing a transceiver that meets the requirements of UAV-UGV cooperative path planning. Simulation results show that, compared to conventional SemCom transceivers, the proposed transceiver significantly reduces data transmission volume while maintaining path planning accuracy, thereby enhancing system collaboration efficiency.
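The volume savings of transmitting semantics instead of raw data can be illustrated with a toy example (not the paper's transceiver): for path planning, only obstacle locations carry information, so a sparse occupancy grid compresses to a short coordinate list.

```python
# Toy illustration: send only the planning-relevant semantics
# (obstacle coordinates) rather than the full perception grid.
import json

def extract_semantics(grid):
    """Keep only obstacle coordinates; free cells carry no planning info."""
    return [(r, c) for r, row in enumerate(grid)
            for c, cell in enumerate(row) if cell == 1]

grid = [[0] * 20 for _ in range(20)]
for r, c in [(3, 4), (7, 7), (12, 1)]:
    grid[r][c] = 1

raw = json.dumps(grid)
semantic = json.dumps(extract_semantics(grid))
savings = 1 - len(semantic) / len(raw)   # fraction of bytes avoided
```

The real framework must also cope with unreliable channels, but the byte-count comparison shows why semantic extraction reduces transmission volume so sharply for sparse scenes.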
Lung infections, particularly pneumonia, pose serious health risks that can escalate rapidly, especially during pandemics. Accurate AI-based severity prediction from medical imaging is essential to support timely clinical decisions and optimize patient outcomes. In this work, we present a novel method applicable to both CT scans and chest X-rays for assessing lung infection severity. Our contributions are twofold: (i) QCross-Att-PVT, a Transformer-based architecture that integrates parallel encoders, a cross-gated attention mechanism, and a feature aggregator to capture rich multi-scale features; and (ii) Conditional Online TransMix, a custom data augmentation strategy designed to address dataset imbalance by generating mixed-label image patches during training. Evaluated on two benchmark datasets, RALO CXR and Per-COVID-19 CT, our method consistently outperforms several state-of-the-art deep learning models. The results emphasize the critical role of data augmentation and gated attention in improving both robustness and predictive accuracy. This approach offers a reliable, adaptable tool to support clinical diagnosis, disease monitoring, and personalized treatment planning. The source code of this work is available at https://github.com/bouthainas/QCross-Att-PVT.
Automating the end-to-end scientific research process poses a fundamental challenge: it requires both evolving high-level plans that are novel and sound, and executing these plans correctly amidst dynamic and uncertain conditions. To address this bilevel challenge, we propose a novel Double-Loop Multi-Agent (DLMA) framework to solve the given research problem automatically. The leader loop, composed of professor agents, is responsible for evolving research plans. It employs an evolutionary algorithm through involvement, improvement, and integration meetings to iteratively generate and refine a pool of research proposals, exploring the solution space effectively. The follower loop, composed of doctoral student agents, is responsible for executing the best-evolved plan. It dynamically adjusts the plan during implementation via pre-hoc and post-hoc meetings, ensuring each step (e.g., drafting, coding) is well-supported by contextual and external observations. Extensive experiments on benchmarks like ACLAward and Laboratory show that DLMA generates research papers that achieve state-of-the-art scores in automated evaluation, significantly outperforming strong baselines. Ablation studies confirm the critical roles of both loops, with evolution driving novelty and execution ensuring soundness.
Optimal transport (OT) and Schr{\"o}dinger bridge (SB) problems have emerged as powerful frameworks for transferring probability distributions with minimal cost. However, existing approaches typically focus on endpoint matching while neglecting critical path-dependent properties -- particularly collision avoidance in multiagent systems -- which limits their practical applicability in robotics, economics, and other domains where inter-agent interactions are essential. Moreover, traditional density control methods often rely on independence assumptions that fail to capture swarm dynamics. We propose a novel framework that addresses these limitations by employing flow matching as the core modeling tool, where the flow model co-evolves with the control policy. Unlike prior methods that treat transport trajectories as mere interpolations between source and target distributions, our approach explicitly optimizes over the entire transport path, enabling the incorporation of trajectory-dependent costs and collision avoidance constraints. Our framework bridges optimal transport theory with mean field control, providing a principled approach to multiagent coordination problems where both endpoint alignment and path properties are critical. Experimental results demonstrate that our method successfully generates collision-free transport plans while maintaining computational efficiency comparable to standard flow matching approaches.
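The two ingredients can be sketched in miniature (our notation and simplifications, not the paper's method): flow matching regresses onto the velocity of an interpolant between source and target states, and a path-dependent collision cost can be evaluated on the interpolated multiagent states rather than only at the endpoints.

```python
# Minimal sketch: a linear flow-matching interpolant plus a pairwise
# collision penalty evaluated along the transport path (1-D agents).
def fm_point_and_target(x0, x1, t):
    """Interpolant x_t and its velocity target for flow matching."""
    xt = [(1 - t) * a + t * b for a, b in zip(x0, x1)]
    vt = [b - a for a, b in zip(x0, x1)]
    return xt, vt

def collision_penalty(positions, d_min=1.0):
    """Sum of squared violations of a minimum separation d_min."""
    pen = 0.0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            d = abs(positions[i] - positions[j])
            pen += max(0.0, d_min - d) ** 2
    return pen

# Two agents that swap positions collide at the midpoint of the
# straight interpolation path, which endpoint matching alone misses.
xt, vt = fm_point_and_target(x0=[0.0, 2.0], x1=[2.0, 0.0], t=0.5)
pen = collision_penalty(xt)
```

This is precisely the failure mode the abstract targets: the endpoints are fine, but the naive interpolated path incurs a nonzero collision penalty, so the path itself must enter the optimization.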
Contemporary language models are fluent yet routinely mishandle the types of meaning their outputs entail. We argue that hallucination, brittle moderation, and opaque compliance outcomes are symptoms of missing type-theoretic semantics rather than data or scale limitations. Building on Montague's view of language as a typed, compositional algebra, we recast alignment as a parsing problem: natural-language inputs must be compiled into structures that make explicit their descriptive, normative, and legal dimensions under context. We present Savassan, a neuro-symbolic architecture that compiles utterances into Montague-style logical forms and maps them to typed ontologies extended with deontic operators and jurisdictional contexts. Neural components extract candidate structures from unstructured inputs; symbolic components perform type checking, constraint reasoning, and cross-jurisdiction mapping to produce compliance-aware guidance rather than binary censorship. In cross-border scenarios, the system "parses once" (e.g., defect claim(product x, company y)) and projects the result into multiple legal ontologies (e.g., defamation risk in KR/JP, protected opinion in US, GDPR checks in EU), composing outcomes into a single, explainable decision. This paper contributes: (i) a diagnosis of hallucination as a type error; (ii) a formal Montague-ontology bridge for business/legal reasoning; and (iii) a production-oriented design that embeds typed interfaces across the pipeline. We outline an evaluation plan using legal reasoning benchmarks and synthetic multi-jurisdiction suites. Our position is that trustworthy autonomy requires compositional typing of meaning, enabling systems to reason about what is described, what is prescribed, and what incurs liability within a unified algebra of meaning.
The CosmoCube project plans to deploy a global 21-cm spectrometer with a 10-100 MHz observation band in a lunar orbit. The farside part of such an orbit, i.e., the part of the orbit behind the Moon, offers an ideal site for accurately measuring the 21-cm signal from the Dark Ages, Cosmic Dawn, and Epoch of Reionization, as the effects of the Earth's ionosphere, artificial radio frequency interference (RFI), and complex terrain and soil are all avoided. Given the limitations of a satellite platform, we propose a receiver calibration system design based on a Radio Frequency system-on-chip, consisting of a Vector Network Analyzer (VNA) sub-system and a source-switching sub-system. We introduce the measurement principle of the VNA and discuss the effect of quantization error. The accuracy, stability, and trajectory noise of the VNA are tested in laboratory experiments. We also present the design of the source-switching sub-system and, using mock datasets, show that the imperfect return loss, insertion loss, and isolation of surface-mounted microwave switches have a minimal effect on the sky foreground fitting residuals, which are within $\pm10$ mK under optimal fitting conditions. When all possible measurement errors in reflection coefficients and physical temperatures are taken into account, the foreground fitting residuals for the 50-90 MHz part of the spectrum remain around $\pm20$ mK.
The paper explores the transformation of port logistics operations with artificial intelligence during a port's transformation into a smart port. The research integrates capabilities-based resource analysis and dynamic capabilities with sociotechnical implementations of technologies and resilience approaches of complex systems under disruptions. The system applies robust data infrastructures to propel analytical and AI modules that become effective once integrated with sufficient governance systems, trained personnel, and operational processes to transform planning, safety, and sustainability operations. It applies Scopus bibliometric research to analyze 123 articles using a systematic approach with a search protocol, document screening, and duplication verification. It incorporates annual behavior and distribution of author and country performance analysis with science mapping techniques that explore keyword relations, co-citation, and bibliographic coupling, together with conceptual structuring tools that construct thematic maps and multiple correspondence analysis with community detection, while applying explicit thresholding and robustness tests. The research connects AI applications to smart port domains through specific data-to-impact pathways while providing a method for bibliometric analysis that enables future updates. The research presents a step-by-step approach for data readiness followed by predictive and optimization implementation and organizational integration. The paper supports public policy through recommendations for data-sharing standards and complete environmental benefit assessments. The research proposes a future study plan which combines field-based testing with multiple port assessments to enhance both cause-effect understanding and research applicability.
Protein sequence alignment is a cornerstone of bioinformatics, traditionally approached using dynamic programming (DP) algorithms that find an optimal sequential path. This paper introduces UniOTalign, a novel framework that recasts alignment from a fundamentally different perspective: global matching via Optimal Transport (OT). Instead of finding a path, UniOTalign computes an optimal flow or transport plan between two proteins, which are represented as distributions of residues in a high-dimensional feature space. We leverage pre-trained Protein Language Models (PLMs) to generate rich, context-aware embeddings for each residue. The core of our method is the Fused Unbalanced Gromov-Wasserstein (FUGW) distance, which finds a correspondence that simultaneously minimizes feature dissimilarity and preserves the internal geometric structure of the sequences. This approach naturally handles sequences of different lengths and is particularly powerful for aligning proteins with nonsequential similarities, such as domain shuffling or circular permutations, which are challenging for traditional DP methods. UniOTalign therefore offers a new, mathematically principled, global matching paradigm for protein alignment, moving beyond the limitations of path-finding algorithms.
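The global-matching idea can be previewed with plain entropic OT (a stand-in for illustration only; the paper's actual objective is the fused unbalanced Gromov-Wasserstein distance, which additionally preserves internal geometry and handles unbalanced mass):

```python
# Sketch of OT-style global matching: Sinkhorn iterations produce a
# transport plan between residue distributions from a cost matrix.
import math

def sinkhorn(cost, eps=0.1, iters=200):
    """Entropic OT plan between uniform marginals via Sinkhorn scaling."""
    n, m = len(cost), len(cost[0])
    K = [[math.exp(-c / eps) for c in row] for row in cost]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [(1.0 / n) / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [(1.0 / m) / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# Toy feature-dissimilarity costs where residue i of protein A
# corresponds to residue i of protein B (zero cost on the diagonal).
cost = [[0.0 if i == j else 1.0 for j in range(3)] for i in range(3)]
plan = sinkhorn(cost)
# Transport mass concentrates on the diagonal correspondence.
```

Unlike a DP path, the plan is a full matrix of correspondences, which is what lets OT-based matching express nonsequential relationships such as circular permutations.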
Safe navigation in uncertain environments requires planning methods that integrate risk aversion with active perception. In this work, we present a unified framework that refines a coarse reference path by constructing tail-sensitive risk maps from Average Value-at-Risk statistics on an online-updated 3D Gaussian-splat Radiance Field. These maps enable the generation of locally safe and feasible trajectories. In parallel, we formulate Next-Best-View (NBV) selection as an optimization problem on the SE(3) pose manifold, where Riemannian gradient descent maximizes an expected information gain objective to reduce uncertainty most critical for imminent motion. Our approach advances the state-of-the-art by coupling risk-averse path refinement with NBV planning, while introducing scalable gradient decompositions that support efficient online updates in complex environments. We demonstrate the effectiveness of the proposed framework through extensive computational studies.
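Average Value-at-Risk (also known as CVaR) has a simple sample form that conveys why the resulting risk maps are tail-sensitive (a generic definition-level sketch, not the paper's implementation): it averages the worst $(1-\alpha)$ fraction of sampled costs rather than their mean.

```python
# Illustrative AVaR (CVaR) of sampled traversal costs: the average of
# the worst (1 - alpha) tail, so rare large costs dominate the value.
import math

def avar(samples, alpha=0.9):
    xs = sorted(samples)
    k = math.ceil((1 - alpha) * len(xs))   # size of the upper tail
    tail = xs[-k:]
    return sum(tail) / len(tail)

costs = [0.1, 0.2, 0.2, 0.3, 0.3, 0.4, 0.5, 0.6, 2.0, 5.0]
risk = avar(costs, alpha=0.8)   # mean of the worst 20%: (2.0 + 5.0) / 2
```

A mean-based map would score these samples at 0.96 and hide the outliers; the AVaR map flags the cell as dangerous, which is the behavior a risk-averse refinement step needs.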
Load forecasting has always been a challenge for grid operators due to the growing complexity of power systems. Increasingly extreme weather and growing customer energy demand have at times caused load forecasts to fail. This research presents a Support Vector Regression (SVR) framework for electric load forecasting that outperforms the industry standard. The SVR model demonstrates better accuracy across all evaluation metrics that are important for power system operations: a 54.2\% reduction in Mean Squared Error (31.91 vs. 69.63), a 33.5\% improvement in Mean Absolute Error, and gains across other metrics. These improvements translate into significant benefits when integrated with power forecasting tools and show that the approach provides an additional accuracy check for system planning and resource allocation in electric power systems.
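The headline MSE figure follows directly from the two reported values:

```python
# Checking the reported MSE improvement from the stated values.
mse_baseline, mse_svr = 69.63, 31.91
reduction = (mse_baseline - mse_svr) / mse_baseline
# (69.63 - 31.91) / 69.63 = 0.5417..., i.e. the quoted 54.2% reduction
```

The underlying MAE values are not given in the abstract, so the 33.5% figure cannot be checked the same way.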
This work investigates the reasoning and planning capabilities of foundation models and their scalability in complex, dynamic environments. We introduce PuzzlePlex, a benchmark designed to assess these capabilities through a diverse set of puzzles. PuzzlePlex consists of 15 types of puzzles, including deterministic and stochastic games of varying difficulty, as well as single-player and two-player scenarios. The PuzzlePlex framework provides a comprehensive environment for each game, and supports extensibility to generate more challenging instances as foundation models evolve. Additionally, we implement customized game-playing strategies for comparison. Building on this benchmark, we develop fine-grained metrics to measure performance and conduct an in-depth analysis of frontier foundation models across two settings: instruction-based and code-based. Furthermore, we systematically investigate their scaling limits. Our findings show that reasoning models outperform others in instruction-based settings, while code-based execution presents greater challenges but offers a scalable and efficient alternative. PuzzlePlex enables targeted evaluation and guides future improvements in reasoning, planning, and generalization for foundation models.
Understanding and modeling human mobility is central to challenges in transport planning, sustainable urban design, and public health. Despite decades of effort, simulating individual mobility remains challenging because of its complex, context-dependent, and exploratory nature. Here, we present MobilityGen, a deep generative model that produces realistic mobility trajectories spanning days to weeks at large spatial scales. By linking behavioral attributes with environmental context, MobilityGen reproduces key patterns such as scaling laws for location visits, activity time allocation, and the coupled evolution of travel mode and destination choices. It reflects spatio-temporal variability and generates diverse, plausible, and novel mobility patterns consistent with the built environment. Beyond standard validation, MobilityGen yields insights not attainable with earlier models, including how access to urban space varies across travel modes and how co-presence dynamics shape social exposure and segregation. Our work establishes a new framework for mobility simulation, paving the way for fine-grained, data-driven studies of human behavior and its societal implications.
Existing decentralized methods for multi-agent motion planning lack formal, infinite-horizon safety guarantees, especially for communication-constrained systems. We present R3R, to our knowledge the first decentralized and asynchronous framework for multi-agent motion planning under distance-based communication constraints with infinite-horizon safety guarantees for systems of nonlinear agents. R3R's novelty lies in combining our gatekeeper safety framework with a geometric constraint called R-Boundedness, which together establish a formal link between an agent's communication radius and its ability to plan safely. We constrain trajectories to within a fixed planning radius that is a function of the agent's communication radius, which enables trajectories to be proven safe for all time using only local information. Our algorithm is fully asynchronous and ensures the forward invariance of these guarantees even in time-varying networks where agents asynchronously join, leave, and replan. We validate our approach in simulations of up to 128 Dubins vehicles, demonstrating 100% safety in dense, obstacle-rich scenarios. Our results demonstrate that R3R's performance scales with agent density rather than problem size, providing a practical solution for scalable and provably safe multi-agent systems.
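The geometric intuition behind a planning-radius guarantee can be sketched as follows (our deliberate simplification, not R3R's exact R-Boundedness condition): if every agent's trajectory stays within radius `r` of its start, two agents whose starts are farther apart than `2*r` plus a safety margin can never conflict, so safety is checkable from local information alone.

```python
# Simplified geometric check: bounded planning radii turn pairwise
# safety into a test on initial separation (names are hypothetical).
import math

def locally_safe(p1, p2, r, d_safe=0.5):
    """True if two agents confined to radius-r balls around their
    starts p1, p2 cannot come within d_safe of each other."""
    dist = math.hypot(p1[0] - p2[0], p1[1] - p2[1])
    return dist > 2 * r + d_safe

ok = locally_safe((0.0, 0.0), (10.0, 0.0), r=3.0)        # 10 > 6.5
too_close = locally_safe((0.0, 0.0), (6.0, 0.0), r=3.0)  # 6 <= 6.5
```

Tying `r` to the communication radius then ensures an agent can always hear from exactly those peers whose planning balls could intersect its own, which is the kind of link the paper formalizes.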
We investigate the impact of missing input data on the construction of second-generation Time Delay Interferometry (TDI) variables, which enable data analysis for the Laser Interferometer Space Antenna (LISA). TDI relies on the introduction of precise time delays into the raw interferometric data streams before they are combined to suppress otherwise dominant laser phase noise. We show that a single missing sample, corresponding to 0.25 s of data, will result in an effective data gap of approximately 90 s in the second-generation TDI output if further measures are not taken. This additional gap is largely independent of the initial gap duration, but increases linearly with the order of the fractional-delay filter used for the computations. For a realistic gap scenario, incorporating both planned and unplanned data interruptions consistent with a target duty cycle of ~84%, we find that frequent, short-duration gaps (e.g., a total of 1000 per year, each lasting ~100 s) could result in an additional loss in the TDI variables of about one day per year, corresponding to a ~0.3% reduction in duty cycle. This amounts to losing approximately one day per year of LISA data suitable for the global fit.
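The gap-widening mechanism can be mimicked with a toy model (the filter lengths here are illustrative, not LISA's actual values): any output sample whose interpolation window touches an invalid input sample becomes invalid itself, so each application of a length-$(2p+1)$ fractional-delay filter widens a gap by $2p$ samples, and nested TDI delays compound the effect linearly in $p$.

```python
# Toy model of gap widening under delay filtering: a sample is valid
# only if the whole filter window around it is valid.
def widen_gap(valid, half_width):
    n = len(valid)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        out.append(all(valid[lo:hi]))
    return out

valid = [True] * 100
valid[50] = False                 # a single missing sample
after_one = widen_gap(valid, half_width=10)
after_two = widen_gap(after_one, half_width=10)
gap1 = after_one.count(False)     # 1 + 2*10 = 21 invalid samples
gap2 = after_two.count(False)     # 1 + 2*(10 + 10) = 41 invalid samples
```

This reproduces the qualitative findings: the widening is independent of the initial gap length and scales linearly with the filter order and the number of nested delays.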
Planning safe trajectories under model uncertainty is a fundamental challenge. Robust planning ensures safety by considering worst-case realizations, yet it ignores uncertainty reduction and leads to overly conservative behavior. Actively reducing uncertainty on the fly during a nominal mission defines the dual control problem. Most approaches address this by adding a weighted exploration term to the cost, tuned to trade off the nominal objective and uncertainty reduction, but without formally considering when exploration is beneficial; moreover, safety is enforced in some methods but not in others. We propose a framework that integrates robust planning with active exploration under formal guarantees. The key innovation is that exploration is pursued only when it provides a verifiable improvement without compromising safety. To achieve this, we build on our earlier gatekeeper architecture for safety verification and extend it to generate trajectories that are both safe and informative, reducing uncertainty and the cost of the mission or keeping the cost within a user-defined budget. The methodology is evaluated via simulation case studies on the online dual control of a quadrotor under parametric uncertainty.
Recent advances in generative models have sparked exciting new possibilities in the field of autonomous vehicles. Specifically, video generation models are now being explored as controllable virtual testing environments. Simultaneously, end-to-end (E2E) driving models have emerged as a streamlined alternative to conventional modular autonomous driving systems, gaining popularity for their simplicity and scalability. However, the application of these techniques to simulation and planning raises important questions. First, while video generation models can generate increasingly realistic videos, can these videos faithfully adhere to the specified conditions and be realistic enough for E2E autonomous planner evaluation? Second, given that data is crucial for understanding and controlling E2E planners, how can we gain deeper insights into their biases and improve their ability to generalize to out-of-distribution scenarios? In this work, we bridge the gap between the driving models and generative world models (Drive&Gen) to address these questions. We propose novel statistical measures leveraging E2E drivers to evaluate the realism of generated videos. By exploiting the controllability of the video generation model, we conduct targeted experiments to investigate distribution gaps affecting E2E planner performance. Finally, we show that synthetic data produced by the video generation model offers a cost-effective alternative to real-world data collection. This synthetic data effectively improves E2E model generalization beyond existing Operational Design Domains, facilitating the expansion of autonomous vehicle services into new operational contexts.
Post-merger integration presents unique challenges for professionals responsible for information system integration, which aims to align and combine the diverse system architectures of merging organizations. Although theoretical and practical guidance exists for post-merger integration at the business level, there is a significant gap in training for information system integration in this context. Prior research introduced the methods AMILI (Support method for informed decision identification) and AMILP (Support method for informed decision-making) to support information system integration decisions in post-merger integration, but practical application revealed a high learning curve and low learner motivation. This paper explores how game-based learning design can address these limitations by transforming static method training into an engaging learning experience. The study analyzes foundational learning theories, cognitive load and motivation models, and serious game design frameworks to identify the essential requirements for a game-based learning design framework tailored to information system integration in post-merger integration. The requirements are structured in two components: the transformation process and the resulting learning experience. The paper concludes with a plan for developing and evaluating the proposed framework through iterative design and real-world validation.
Robotic pollination offers a promising alternative to manual labor and bumblebee-assisted methods in controlled agriculture, where wind-driven pollination is absent and regulatory restrictions limit the use of commercial pollinators. In this work, we present and validate a vision-guided robotic framework that uses data from an end-effector mounted RGB-D sensor and combines 3D plant reconstruction, targeted grasp planning, and physics-based vibration modeling to enable precise pollination. First, the plant is reconstructed in 3D and registered to the robot coordinate frame to identify obstacle-free grasp poses along the main stem. Second, a discrete elastic rod model predicts the relationship between actuation parameters and flower dynamics, guiding the selection of optimal pollination strategies. Finally, a manipulator with soft grippers grasps the stem and applies controlled vibrations to induce pollen release. End-to-end experiments demonstrate a 92.5\% main-stem grasping success rate, and simulation-guided optimization of vibration parameters further validates the feasibility of our approach, ensuring that the robot can safely and effectively perform pollination without damaging the flower. To our knowledge, this is the first robotic system to jointly integrate vision-based grasping and vibration modeling for automated precision pollination.
Vision-based Proximal Policy Optimization (PPO) struggles with visual observation-based robotic laparoscopic surgical tasks due to the high-dimensional nature of visual input, the sparsity of rewards in surgical environments, and the difficulty of extracting task-relevant features from raw visual data. We introduce a simple approach that integrates MedFlamingo, a medical domain-specific Vision-Language Model, with PPO. Our method is evaluated on five diverse laparoscopic surgery task environments in LapGym, using only endoscopic visual observations. MedFlamingo PPO outperforms both standard vision-based PPO and OpenFlamingo PPO baselines and converges faster, achieving task success rates exceeding 70% across all environments, with improvements ranging from 66.67% to 1114.29% over baseline. By processing task observations and instructions once per episode to generate high-level planning tokens, our method efficiently combines medical expertise with real-time visual feedback. Our results highlight the value of specialized medical knowledge in robotic surgical planning and decision-making.
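The once-per-episode planning-token scheme can be sketched as follows. This is an illustrative skeleton under stated assumptions, not the authors' code: `vlm_encode`, `policy`, and the environment interface are hypothetical stand-ins for the frozen VLM, the PPO policy, and a LapGym-style environment.

```python
import numpy as np

def run_episode(env, vlm_encode, policy, instruction):
    """One episode: query the (frozen) VLM a single time for high-level
    planning tokens, then condition the policy on the concatenation of
    per-step visual features and those fixed tokens."""
    obs = env.reset()
    plan_tokens = vlm_encode(obs, instruction)  # single VLM call per episode
    done, total_reward = False, 0.0
    while not done:
        # Per-step input: current visual features + episode-level plan tokens.
        features = np.concatenate([np.asarray(obs).ravel(), plan_tokens])
        action = policy(features)
        obs, reward, done = env.step(action)
        total_reward += reward
    return total_reward
```

The design choice this illustrates is amortization: the expensive VLM inference happens once, while the lightweight policy handles real-time visual feedback at every step.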
Planetary exploration missions require robust locomotion systems capable of operating in extreme environments over extended periods. This paper presents the DISTANT (Distant Transmission and Steering Systems) design, a novel approach for relocating rover traction and steering actuators from wheel-mounted positions to a thermally protected warm box within the rover body. The design addresses critical challenges in long-distance traversal missions by protecting sensitive components from thermal cycling, dust contamination, and mechanical wear. A double wishbone suspension configuration with cardan joints and capstan drive steering has been selected as the optimal architecture following comprehensive trade-off analysis. The system enables independent wheel traction, steering control, and suspension management whilst maintaining all motorisation within the protected environment. The design meets a 50 km traverse requirement without performance degradation, with integrated dust protection mechanisms and thermal management solutions. Testing and validation activities are planned for Q1 2026 following breadboard manufacturing at 1:3 scale.
Phylogenetic trees represent certain species and their likely ancestors. In such a tree, present-day species are leaves and an edge from u to v indicates that u is an ancestor of v. Weights on these edges indicate the phylogenetic distance. The phylogenetic diversity (PD) of a set of species A is the total weight of edges that are on any path between the root of the phylogenetic tree and a species in A. Selecting a small set of species that maximizes phylogenetic diversity for a given phylogenetic tree is an essential task in preservation planning, where limited resources naturally prevent saving all species. An optimal solution can be found with a greedy algorithm [Steel, Systematic Biology, 2005; Pardi and Goldman, PLoS Genetics, 2005]. However, when a food web representing predator-prey relationships is given, finding a set of species that optimizes phylogenetic diversity subject to the condition that each saved species should be able to find food among the preserved species is NP-hard [Spillner et al., IEEE/ACM, 2008]. We present a generalization of this problem, where, inspired by biological considerations, the food web has weighted edges to represent the importance of predator-prey relationships. We show that this version is NP-hard even when both structures, the food web and the phylogenetic tree, are stars. To cope with this intractability, we proceed in two directions. Firstly, we study special cases where a species can only survive if a given fraction of its prey is preserved. Secondly, we analyze these problems through the lens of parameterized complexity. Our results include that finding a solution is fixed-parameter tractable with respect to the vertex cover number of the food web, assuming the phylogenetic tree is a star.
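The phylogenetic diversity measure and the greedy selection from the abstract can be made concrete with a short sketch. This is a minimal illustration assuming a rooted tree stored as child-to-parent pointers with each edge identified by its child endpoint; the function names are mine, not from the cited works.

```python
def phylogenetic_diversity(parent, weight, chosen):
    """PD of `chosen`: total weight of edges lying on some root-to-species
    path, each edge counted once. `parent[v]` is v's ancestor (root maps to
    None); `weight[v]` is the weight of the edge (parent[v], v)."""
    edges = set()
    for v in chosen:
        # Walk up to the root, stopping at already-counted edges.
        while parent[v] is not None and v not in edges:
            edges.add(v)
            v = parent[v]
    return sum(weight[v] for v in edges)

def greedy_pd(parent, weight, leaves, k):
    """Greedily select k species maximizing PD; on trees this greedy
    strategy is optimal (Steel 2005; Pardi and Goldman 2005)."""
    chosen = []
    for _ in range(k):
        best = max((leaf for leaf in leaves if leaf not in chosen),
                   key=lambda leaf: phylogenetic_diversity(
                       parent, weight, chosen + [leaf]))
        chosen.append(best)
    return chosen
```

Note that the greedy guarantee holds only for the unconstrained tree problem; once food-web viability constraints are added, the abstract's hardness results apply and greedy selection is no longer optimal in general.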