India is a founder-member country participating in the construction of the international multipurpose accelerator facility called the Facility for Antiproton and Ion Research (FAIR) at Darmstadt, Germany. Bose Institute, Kolkata, has been designated the Indian shareholder of FAIR GmbH and the nodal Indian institution for coordinating Indian participation in the FAIR programme. Indian participation in FAIR is twofold. First, the advancement of knowledge in nuclear astrophysics and reactions, high-energy nuclear physics, atomic \& plasma physics and their applications, through the participation of Indian researchers, engineers and students in the various experiments planned at FAIR. Second, India is contributing high-tech accelerator equipment as an in-kind contribution to FAIR. Our active involvement includes the design, manufacture and supply of in-kind accelerator items, e.g. power converters, vacuum chambers, beam catchers and IT diagnostic cables, as well as the coordination of Indian scientists' participation in the FAIR experiments, including detector development, physics simulation and experimental data analysis. Indian researchers have been participating in two major experiments at FAIR, namely Nuclear Structure, Astrophysics and Reactions (NUSTAR) and Compressed Baryonic Matter (CBM); Bose Institute in particular is involved in the CBM experiment, which studies and characterizes the matter created in relativistic nucleus-nucleus collisions at high net-baryon density and relatively moderate temperature. In this article, a brief overview of the FAIR facility, the experiments at FAIR and the Indian participation is presented.
This paper reviews discounting approaches for modeling multi-year energy investments, focusing on total versus annualised cost formulations. We discuss how the time value of money is handled, and how salvage value and milestone-year weighting can address mismatches between asset lifetimes and model horizons. These methods are implemented in the open-source TulipaEnergyModel to support transparent and tractable long-term energy system planning.
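To make the discounting machinery concrete, the following minimal Python sketch annualises an overnight investment cost with a capital recovery factor and credits a salvage value for asset life remaining beyond the model horizon. The function names and the annuity-based salvage formulation are illustrative assumptions, not the exact TulipaEnergyModel implementation.

```python
# Sketch: annualised cost and salvage value for a multi-year investment.
# The annuity-based salvage term is an assumed formulation for illustration.

def crf(rate: float, lifetime: int) -> float:
    """Capital recovery factor: spreads an overnight cost into equal annual payments."""
    if rate == 0:
        return 1.0 / lifetime
    return rate * (1 + rate) ** lifetime / ((1 + rate) ** lifetime - 1)

def salvage_value(capex: float, rate: float, lifetime: int,
                  build_year: int, horizon_end: int) -> float:
    """Discounted value (at build_year) of annuity payments falling after the horizon."""
    last_life_year = build_year + lifetime - 1
    annuity = capex * crf(rate, lifetime)
    return sum(annuity / (1 + rate) ** (t - build_year)
               for t in range(horizon_end + 1, last_life_year + 1))

capex, rate, lifetime = 1000.0, 0.05, 30
print(round(capex * crf(rate, lifetime), 2))                       # annualised cost
print(round(salvage_value(capex, rate, lifetime, 2030, 2050), 2))  # credit beyond 2050
```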
This article proposes a new path planning method for addressing multi-level terrain situations. The proposed method includes innovations in three aspects: 1) the pre-processing of point cloud maps with a multi-level skip-list structure and a data-slimming algorithm for well-organized and simplified map formalization and management, 2) the direct acquisition of local traversability indexes through vehicle and point cloud interaction analysis, which avoids the need for surface fitting, and 3) the assignment of traversability indexes on a multi-level connectivity graph to generate a weighted traversability graph for general search-based path planning. The A* algorithm is modified to utilize the traversability graph to generate a short and safe path. The effectiveness and reliability of the proposed method are verified through indoor and outdoor experiments conducted in various environments, including multi-floor buildings, woodland, and rugged mountainous regions. The results demonstrate that the proposed method can properly address 3D path planning problems for ground vehicles in a wide range of situations.
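To make the final stage concrete, here is a minimal A* sketch over a weighted traversability graph. The cost blend of metric length and a per-node traversability index (weights alpha and beta) is an assumed form for illustration, not the authors' exact modification.

```python
# Sketch: A* on a traversability-weighted graph (illustrative cost model).
import heapq, math

def a_star(graph, traversability, pos, start, goal, alpha=1.0, beta=2.0):
    """graph: node -> neighbours; traversability: node -> [0, 1], 1 = safest;
    pos: node -> 3D coordinates used for distances and the heuristic."""
    def h(n):                       # admissible straight-line heuristic
        return alpha * math.dist(pos[n], pos[goal])
    g, came, closed = {start: 0.0}, {}, set()
    open_heap = [(h(start), start)]
    while open_heap:
        _, u = heapq.heappop(open_heap)
        if u == goal:               # reconstruct the short-and-safe path
            path = [u]
            while u in came:
                u = came[u]
                path.append(u)
            return path[::-1]
        if u in closed:
            continue
        closed.add(u)
        for v in graph[u]:
            # metric length plus a penalty for poorly traversable nodes
            step = alpha * math.dist(pos[u], pos[v]) + beta * (1.0 - traversability[v])
            if g[u] + step < g.get(v, float("inf")):
                g[v] = g[u] + step
                came[v] = u
                heapq.heappush(open_heap, (g[v] + h(v), v))
    return None                     # no traversable path found
```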
This study examines the psychological impact of energy crises on households, utilising the Perceived Stress Scale-10 (PSS-10) to measure the stress induced by disruptions in electricity, gas, and fuel supply and pricing. Through a multivariate analysis incorporating Ordinary Least Squares (OLS) regression, Simultaneous-Quantile Regressions (SQR), Random Forest (RF) and Ordered Probit models, the research identifies the key socio-demographic and environmental factors influencing household stress. Our findings reveal that urban residents, low-income households, older individuals, and those with low environmental awareness are particularly vulnerable to stress during energy crises. Regional disparities and attitudes towards nuclear and renewable energy also significantly shape stress responses. The study emphasises the need for psychologically informed energy policy, advocating for the inclusion of stress metrics in energy planning to enhance resilience and address the multi-dimensional nature of energy insecurity. This research contributes a novel, human-centric perspective to energy policy, urging policymakers to integrate psychosocial resilience alongside traditional technical and economic considerations in the design of energy interventions.
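For readers wanting to reproduce the regression side of such an analysis, the sketch below fits two of the named models, OLS and quantile regression, on synthetic data; the column names and data-generating process are assumptions (statsmodels supports both via its formula API).

```python
# Illustrative fit of two of the models named above on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "pss10": rng.integers(0, 41, n),          # PSS-10 total score (0-40)
    "urban": rng.integers(0, 2, n),
    "low_income": rng.integers(0, 2, n),
    "age": rng.integers(18, 80, n),
    "env_awareness": rng.uniform(0, 1, n),
})

ols = smf.ols("pss10 ~ urban + low_income + age + env_awareness", df).fit()
q75 = smf.quantreg("pss10 ~ urban + low_income + age + env_awareness", df).fit(q=0.75)
print(ols.params)   # mean effects
print(q75.params)   # effects at the 75th stress percentile
```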
Cloud computing has become a pivotal platform for executing scientific workflows due to its scalable and cost-effective infrastructure. Scientific Cloud Service Providers (SCSPs) act as intermediaries that rent virtual machines (VMs) from Infrastructure-as-a-Service (IaaS) providers to meet users' workflow execution demands. An SCSP earns a profit from executing a scientific workflow only if it completes the workflow before its specified deadline. This paper addresses two key challenges that impact the profitability of SCSPs: the cold start problem and the efficient management of diverse VM pricing models, namely reserved, on-demand, and spot instances. We propose a hybrid scheduling framework that integrates initial planning based on historical data with real-time adaptations informed by actual workload variations. In the initial phase, VMs are provisioned using reserved pricing based on predicted workloads and spot instances. During execution, the system dynamically adjusts by provisioning additional VMs through on-demand or spot instances to accommodate unexpected bursts in task arrivals. Our framework also incorporates a dependency-aware task scheduling strategy that accounts for cold start delays and spot pricing volatility. Experimental results on real-world benchmark datasets demonstrate that our approach outperforms state-of-the-art methods, achieving up to 20% improvement over cold-start-focused techniques and 15% over pricing-model-based VM provisioning strategies.
Precise manipulation tasks require accurate knowledge of payload inertial parameters. Unfortunately, identifying these parameters for unknown payloads while ensuring that the robotic system satisfies its input and state constraints and avoids collisions with the environment remains a significant challenge. This paper presents an integrated framework that enables robotic manipulators to safely and automatically identify payload parameters while maintaining operational safety guarantees. The framework consists of two synergistic components: an online trajectory planning and control framework that generates provably safe, exciting trajectories for system identification, which can be tracked while respecting robot constraints and avoiding obstacles; and a robust system identification method that computes rigorous overapproximative bounds on end-effector inertial parameters under the assumption of bounded sensor noise. Experimental validation on a robotic manipulator performing challenging tasks with various unknown payloads demonstrates the framework's effectiveness in establishing accurate parameter bounds while maintaining safety throughout the identification process. The code is available at our project webpage: https://roahmlab.github.io/OnlineSafeSysID/.
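The bounded-noise identification idea can be illustrated with a toy set-membership computation: each measurement constrains the feasible parameter set, and a pair of linear programs bounds each parameter. This is a simplified stand-in under assumed dimensions and noise bounds, not the paper's method.

```python
# Toy set-membership identification: y_k = A_k @ phi + e_k with |e_k| <= eps.
# Each parameter is bounded by minimizing/maximizing it over the feasible set.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_params, n_meas, eps = 4, 60, 0.05            # e.g. mass and first moments
phi_true = np.array([1.2, 0.05, -0.02, 0.1])
A = rng.normal(size=(n_meas, n_params))        # stand-in regressor matrix
y = A @ phi_true + rng.uniform(-eps, eps, n_meas)

# feasible set: -eps <= y - A @ phi <= eps, stacked as A_ub @ phi <= b_ub
A_ub = np.vstack([A, -A])
b_ub = np.concatenate([y + eps, -(y - eps)])

intervals = []
for i in range(n_params):
    c = np.zeros(n_params); c[i] = 1.0
    lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n_params).fun
    hi = -linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n_params).fun
    intervals.append((lo, hi))
print(intervals)   # rigorous intervals containing each true parameter
```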
Continuous Software Engineering (CSE) is widely adopted in the industry, integrating practices such as Continuous Integration and Continuous Deployment (CI/CD). Beyond technical aspects, CSE also encompasses business activities like continuous planning, budgeting, and operational processes. Coordinating these activities in large-scale product development involves multiple stakeholders, increasing complexity. This study aims to address this complexity by identifying and analyzing critical dependencies in large-scale CSE. Based on 17 semi-structured interviews conducted at two Nordic fintech companies, our preliminary findings indicate that dependencies between software teams and support functions, as well as between software teams and external entities, are the primary sources of delays and bottlenecks. As a next step, we plan to further refine our understanding of critical dependencies in large-scale CSE and explore coordination mechanisms that can better support software development teams in managing these challenges.
We introduce Phi-4-reasoning, a 14-billion parameter reasoning model that achieves strong performance on complex reasoning tasks. Trained via supervised fine-tuning of Phi-4 on a carefully curated set of "teachable" prompts, selected for the right level of complexity and diversity, and on reasoning demonstrations generated using o3-mini, Phi-4-reasoning generates detailed reasoning chains that effectively leverage inference-time compute. We further develop Phi-4-reasoning-plus, a variant enhanced through a short phase of outcome-based reinforcement learning that offers higher performance by generating longer reasoning traces. Across a wide range of reasoning tasks, both models outperform significantly larger open-weight models such as DeepSeek-R1-Distill-Llama-70B and approach the performance level of the full DeepSeek-R1 model. Our comprehensive evaluations span benchmarks in math and scientific reasoning, coding, algorithmic problem solving, planning, and spatial understanding. Interestingly, we observe a non-trivial transfer of improvements to general-purpose benchmarks as well. In this report, we provide insights into our training data, our training methodologies, and our evaluations. We show that the benefit of careful data curation for supervised fine-tuning (SFT) extends to reasoning language models, and can be further amplified by reinforcement learning (RL). Finally, our evaluation points to opportunities for improving how we assess the performance and robustness of reasoning models.
Alzheimer's Disease (AD) is marked by significant inter-individual variability in its progression, complicating accurate prognosis and personalized care planning. This heterogeneity underscores the critical need for predictive models capable of forecasting patient-specific disease trajectories. Artificial Intelligence (AI) offers powerful tools to address this challenge by analyzing complex, multi-modal, and longitudinal patient data. This paper provides a comprehensive survey of AI methodologies applied to personalized AD progression prediction. We review key approaches including state-space models for capturing temporal dynamics, deep learning techniques like Recurrent Neural Networks for sequence modeling, Graph Neural Networks (GNNs) for leveraging network structures, and the emerging concept of AI-driven digital twins for individualized simulation. Recognizing that data limitations often impede progress, we examine common challenges such as high dimensionality, missing data, and dataset imbalance. We further discuss AI-driven mitigation strategies, with a specific focus on synthetic data generation using Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) to augment and balance datasets. The survey synthesizes the strengths and limitations of current approaches, emphasizing the trend towards multimodal integration and the persistent need for model interpretability and generalizability. Finally, we identify critical open challenges, including robust external validation, clinical integration, and ethical considerations, and outline promising future research directions such as hybrid models, causal inference, and federated learning. This review aims to consolidate current knowledge and guide future efforts in developing clinically relevant AI tools for personalized AD prognostication.
While most heuristics studied in heuristic search depend only on the state, some accumulate information during search and thus also depend on the search history. Various existing approaches use such dynamic heuristics in $\mathrm{A}^*$-like algorithms and appeal to classic results for $\mathrm{A}^*$ to show optimality. However, doing so ignores the complexities of searching with a mutable heuristic. In this paper we formalize the idea of dynamic heuristics and use them in a generic algorithm framework. We study a particular instantiation that models $\mathrm{A}^*$ with dynamic heuristics and show general optimality results. Finally we show how existing approaches from classical planning can be viewed as special cases of this instantiation, making it possible to directly apply our optimality results.
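The searched-with-mutable-heuristic setting can be sketched as an $\mathrm{A}^*$-like loop whose heuristic object accumulates information (here, learned dead ends) and may therefore change node evaluations mid-search. The interface and re-queueing rule below are illustrative assumptions; the paper's framework handles the formal subtleties this sketch glosses over.

```python
# Schematic A*-like search with a dynamic (mutable) heuristic.
import heapq, itertools

class DeadEndHeuristic:
    """Toy dynamic heuristic: wraps a base estimate and learns dead ends
    during search, after which those states evaluate to infinity."""
    def __init__(self, base_h):
        self.base_h = base_h
        self.dead = set()
    def __call__(self, s):
        return float("inf") if s in self.dead else self.base_h(s)
    def update(self, s, successors):
        if not successors:          # no successors: s is a dead end
            self.dead.add(s)

def astar_dynamic(start, goal_test, succ, h):
    tie = itertools.count()         # tie-breaker so states never compare
    g = {start: 0.0}
    open_heap = [(h(start), next(tie), start)]
    while open_heap:
        f, _, s = heapq.heappop(open_heap)
        cur = g[s] + h(s)
        if f < cur:                 # h grew since s was queued: re-queue
            heapq.heappush(open_heap, (cur, next(tie), s))
            continue
        if goal_test(s):
            return g[s]
        kids = list(succ(s))        # [(successor, edge_cost), ...]
        h.update(s, kids)           # the heuristic mutates mid-search
        for t, c in kids:
            if g[s] + c < g.get(t, float("inf")):
                g[t] = g[s] + c
                heapq.heappush(open_heap, (g[t] + h(t), next(tie), t))
    return None
```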
Efficient mission planning for cooperative systems involving Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs) requires addressing energy constraints, scalability, and coordination challenges between agents. UAVs excel in rapidly covering large areas but are constrained by limited battery life, while UGVs, with their extended operational range and capability to serve as mobile recharging stations, are hindered by slower speeds. This heterogeneity makes coordination between UAVs and UGVs critical for achieving optimal mission outcomes. In this work, we propose a scalable deep reinforcement learning (DRL) framework to address the energy-constrained cooperative routing problem for multi-agent UAV-UGV teams, aiming to visit a set of task points in minimal time with UAVs relying on UGVs for recharging during the mission. The framework incorporates sortie-wise agent switching to efficiently manage multiple agents, by allocating task points and coordinating actions. Using an encoder-decoder transformer architecture, it optimizes routes and recharging rendezvous for the UAV-UGV team in the task scenario. Extensive computational experiments demonstrate the framework's superior performance over heuristic methods and a DRL baseline, delivering significant improvements in solution quality and runtime efficiency across diverse scenarios. Generalization studies validate its robustness, while a case study on a dynamic scenario highlights its adaptability to real-time changes. This work advances UAV-UGV cooperative routing by providing a scalable, efficient, and robust solution for multi-agent mission planning.
We study a variant of LTLf synthesis that synthesizes adaptive strategies for achieving a multi-tier goal, consisting of multiple increasingly challenging LTLf objectives in nondeterministic planning domains. Adaptive strategies are strategies that at any point of their execution (i) enforce the satisfaction of as many objectives as possible in the multi-tier goal, and (ii) exploit possible cooperation from the environment to satisfy as many as possible of the remaining ones. This happens dynamically: if the environment cooperates (ii) and an objective becomes enforceable (i), then our strategies will enforce it. We provide a game-theoretic technique to compute adaptive strategies that is sound and complete. Notably, our technique is polynomial, in fact quadratic, in the number of objectives. In other words, it handles multi-tier goals with only a minor overhead compared to standard LTLf synthesis.
Mechanical search (MS) in cluttered environments remains a significant challenge for autonomous manipulators, requiring long-horizon planning and robust state estimation under occlusions and partial observability. In this work, we introduce XPG-RL, a reinforcement learning framework that enables agents to efficiently perform MS tasks through explainable, priority-guided decision-making based on raw sensory inputs. XPG-RL integrates a task-driven action prioritization mechanism with a learned context-aware switching strategy that dynamically selects from a discrete set of action primitives such as target grasping, occlusion removal, and viewpoint adjustment. Within this strategy, a policy is optimized to output adaptive threshold values that govern the discrete selection among action primitives. The perception module fuses RGB-D inputs with semantic and geometric features to produce a structured scene representation for downstream decision-making. Extensive experiments in both simulation and real-world settings demonstrate that XPG-RL consistently outperforms baseline methods in task success rates and motion efficiency, achieving up to 4.5$\times$ higher efficiency in long-horizon tasks. These results underscore the benefits of integrating domain knowledge with learnable decision-making policies for robust and efficient robotic manipulation.
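The priority-guided switching can be pictured with a toy selector: primitives are tried in a fixed priority order, and the policy's adaptive thresholds decide when to fall through to the next one. The names, scores, and threshold values below are assumptions for illustration, not the paper's implementation.

```python
# Toy sketch of priority-guided primitive selection with learned thresholds.
from dataclasses import dataclass

@dataclass
class SceneEstimate:
    target_visible: float    # confidence the target is graspable, in [0, 1]
    occluder_score: float    # best occlusion-removal candidate score

def select_primitive(scene: SceneEstimate, tau_grasp: float, tau_clear: float) -> str:
    """Fixed priority order: grasp target > remove occluder > change viewpoint.
    tau_* stand in for the policy's adaptive threshold outputs."""
    if scene.target_visible >= tau_grasp:
        return "grasp_target"
    if scene.occluder_score >= tau_clear:
        return "remove_occlusion"
    return "adjust_viewpoint"

print(select_primitive(SceneEstimate(0.9, 0.4), tau_grasp=0.7, tau_clear=0.5))
```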
We propose an opinion-driven navigation framework for multi-robot traversal through a narrow corridor. Our approach leverages a multi-agent decision-making model known as Nonlinear Opinion Dynamics (NOD) to address the narrow corridor passage problem, formulated as a multi-robot navigation game. By integrating the NOD model with a multi-robot path planning algorithm, we demonstrate that the framework effectively reduces the likelihood of deadlocks during corridor traversal. To ensure scalability with an increasing number of robots, we introduce a game reduction technique that enables efficient coordination in larger groups. Extensive simulation studies are conducted to validate the effectiveness of the proposed approach.
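A toy integration of a NOD-style model shows the deadlock-breaking mechanism: two robots with mutually opposing coupling settle on opposite-signed opinions, so one commits to the corridor first. The specific form, gains, and coupling matrix are illustrative, following the general NOD structure rather than the paper's exact formulation.

```python
# Toy two-robot Nonlinear Opinion Dynamics: z' = -d*z + u*tanh(z + A@z) + b.
import numpy as np

rng = np.random.default_rng(0)

def nod_step(z, d, u, b, A, dt=0.01):
    return z + dt * (-d * z + u * np.tanh(z + A @ z) + b)

z = 1e-3 * rng.standard_normal(2)     # near-neutral initial opinions
d, u, b = 1.0, 2.5, np.zeros(2)       # damping, attention, bias (assumed gains)
A = np.array([[0.0, -1.2],
              [-1.2, 0.0]])           # mutual opposition breaks the symmetry
for _ in range(3000):
    z = nod_step(z, d, u, b, A)
print(z)                              # opposite signs: one goes, one yields
```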
Access to high-quality medical data is often restricted due to privacy concerns, posing significant challenges for training artificial intelligence (AI) algorithms within Electronic Health Record (EHR) applications. In this study, prompt engineering with the GPT-4 API was employed to generate high-quality synthetic datasets aimed at overcoming this limitation. The generated data encompassed a comprehensive array of patient admission information, including healthcare provider details, hospital departments, wards, bed assignments, patient demographics, emergency contacts, vital signs, immunizations, allergies, medical histories, appointments, hospital visits, laboratory tests, diagnoses, treatment plans, medications, clinical notes, visit logs, discharge summaries, and referrals. To ensure data quality and integrity, advanced validation techniques were implemented, utilizing BERT's Next Sentence Prediction for sentence coherence, GPT-2 for overall plausibility, RoBERTa for logical consistency, and autoencoders for anomaly detection, complemented by a diversity analysis. Synthetic data that met all validation criteria were integrated into a comprehensive PostgreSQL database, serving as the data management system for the EHR application. This approach demonstrates that leveraging generative AI models with rigorous validation can effectively produce high-quality synthetic medical data, facilitating the training of AI algorithms while addressing privacy concerns associated with real patient data.
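One of the named validation steps, plausibility scoring with GPT-2, can be reproduced with the Hugging Face transformers library roughly as below; the acceptance threshold and the example note are assumptions, not the study's calibration.

```python
# Illustrative plausibility check: GPT-2 perplexity of a synthetic note.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss    # mean cross-entropy per token
    return float(torch.exp(loss))

note = "Patient admitted with chest pain; vitals stable; started on aspirin."
print(perplexity(note))   # accept the record only below a chosen threshold
```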
In 2021, the City of Atlanta and the Atlanta Police Foundation launched plans to build a large police training facility in the South River Forest in unincorporated DeKalb County, GA. Residents of Atlanta and DeKalb County, environmental activists, police and prison abolitionists, and other activists and concerned individuals formed the movement in opposition to the facility, known as the Stop Cop City / Defend the Atlanta Forest movement. Social media and digital maps became common tools for communicating information about the facility and the movement. Here, we examine online maps about the facility and the opposition movement, originating from grassroots organizations, the City of Atlanta, news media outlets, the Atlanta Police Foundation, and individuals. We gather and examine 32 publicly available maps collected through the Google Search API, Twitter (now X), Instagram, and Reddit. Using a framework of critical cartography, we conduct a content analysis of these maps to identify the mapping technologies and techniques (data, cartographic elements, styles) used by different stakeholders and the roles that maps and mapping technologies can play in social movements. We examine the extent to which these maps provide data to confirm or contradict concerns raised by grassroots organizations and local residents about the facility. We find that stakeholders and mapmakers use geospatial tools in different ways and likely have varied access to mapping technologies. We argue that documenting the use of maps to communicate information about a contentious project can help enumerate community positions and perspectives, and we advocate for accessible mapmaking tools. We conclude by discussing the implications of the accessibility of mapping technology and of posting maps to social media, and share example map images that extend the geographic information systems (GIS) techniques seen in the retrieved maps.
As urban transit systems transition towards electrification, using renewable energy sources (RES), such as solar, is essential to make them efficient and sustainable. However, the intermittent nature of renewables poses a challenge in deciding the solar panel requirements and battery energy storage system (BESS) capacity at charging locations. To address these challenges, we propose a two-stage stochastic programming model that considers seasonality in solar energy generation while incorporating temperature-based variations in bus energy consumption and dynamic time-of-use electricity prices. Specifically, we formulate the problem as a multi-scenario linear program (LP) where the first-stage long-term variables determine the charging station power capacity, BESS capacity, and the solar panel area at each charging location. The second-stage scenario-specific variables prescribe the energy transferred to buses directly from the grid or the BESS during layovers. We demonstrate the effectiveness of this framework using data from Durham Transit Network (Ontario) and Action Buses (Canberra), where bus schedules and charging locations are determined using a concurrent scheduler-based heuristic. Solar energy data is collected from the National Renewable Energy Laboratory (NREL) database. We solved the multi-scenario LP using Benders' decomposition, which performed better than the dual simplex method, especially when the number of scenarios was high. With solar energy production at the depots, our model estimated a cost savings of 16.48% and 32.00% for the Durham and Canberra networks, respectively. Our results also show that the scenario-based schedule adapts better to seasonal variations than a schedule estimated from average input parameters.
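The two-stage structure can be sketched as a deterministic-equivalent LP in PuLP: first-stage sizing variables are shared across scenarios while grid purchases are scenario-specific. All data, names, and the toy storage constraint below are illustrative, not the paper's full formulation (which is solved with Benders' decomposition).

```python
# Skeleton of the two-stage stochastic sizing problem with toy data.
import pulp

scenarios = {"summer": 0.5, "winter": 0.5}       # scenario probabilities
solar_yield = {"summer": 5.0, "winter": 2.0}     # kWh per m^2 per day
demand = 400.0                                   # kWh per day at one depot
c_panel, c_bess, c_grid = 30.0, 20.0, 0.15       # annualised and energy costs

m = pulp.LpProblem("depot_sizing", pulp.LpMinimize)
area = pulp.LpVariable("panel_area_m2", lowBound=0)           # first stage
bess = pulp.LpVariable("bess_kwh", lowBound=0)                # first stage
grid = {s: pulp.LpVariable(f"grid_{s}", lowBound=0) for s in scenarios}

m += c_panel * area + c_bess * bess + 365 * pulp.lpSum(
    p * c_grid * grid[s] for s, p in scenarios.items())
for s in scenarios:
    m += solar_yield[s] * area + grid[s] >= demand            # meet demand
    m += solar_yield[s] * area <= bess + demand               # toy storage cap
m.solve(pulp.PULP_CBC_CMD(msg=False))
print(area.value(), bess.value(), {s: v.value() for s, v in grid.items()})
```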
Adaptive User Interfaces (AUI) play a crucial role in modern software applications by dynamically adjusting interface elements to accommodate users' diverse and evolving needs. However, existing adaptation strategies often lack real-time responsiveness. Reinforcement Learning (RL) has emerged as a promising approach for addressing complex, sequential adaptation challenges, enabling adaptive systems to learn optimal policies based on previous adaptation experiences. Although RL has been applied to AUIs, integrating RL agents effectively within user interactions remains a challenge. In this paper, we enhance an RL-based Adaptive User Interface adaptation framework by incorporating personalized human feedback directly into the learning process. Unlike prior approaches that rely on a single pre-trained RL model, our approach trains a unique RL agent for each user, allowing individuals to actively shape their personal RL agent's policy, potentially leading to more personalized and responsive UI adaptations. To evaluate this approach, we conducted an empirical study to assess the impact of integrating human feedback into the RL-based Adaptive User Interface adaptation framework and its effect on User Experience (UX). The study involved 33 participants interacting with AUIs incorporating human feedback and with non-adaptive user interfaces in two domains: an e-learning platform and a trip-planning application. The results suggest that incorporating human feedback into RL-driven adaptations significantly enhances UX, offering promising directions for advancing adaptive capabilities and user-centered design in AUIs.
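A minimal sketch of folding per-user feedback into the learning update, shown here for tabular Q-learning with an assumed feedback weight; the paper's agent and reward design may differ.

```python
# Toy per-user Q-learning update that blends environment reward with
# the user's explicit feedback on an adaptation (shapes assumed).
import numpy as np

n_states, n_actions, alpha, gamma = 10, 4, 0.1, 0.9
Q = np.zeros((n_states, n_actions))      # one table per user

def update(s, a, s_next, env_reward, human_feedback):
    """human_feedback in {-1, 0, +1}: the user's rating of this adaptation."""
    r = env_reward + 0.5 * human_feedback    # feedback weight is an assumption
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

update(s=0, a=2, s_next=1, env_reward=0.1, human_feedback=+1)
print(Q[0, 2])
```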
Robotic-assisted procedures offer enhanced precision, but fully autonomous systems are limited by gaps in task knowledge, difficulties in modeling unstructured environments, and weak generalisation abilities, while fully manual teleoperated systems face challenges such as delay, instability, and reduced sensory information. To address these, we developed an interactive control strategy that assists the human operator by predicting their motion plan at both high and low levels. At the high level, a surgeme recognition system is employed through a Transformer-based real-time gesture classification model to dynamically adapt to the operator's actions, while at the low level, a Confidence-based Intention Assimilation Controller adjusts robot actions based on user intent and shared control paradigms. The system is built around a robotic suturing task, supported by sensors that capture the kinematics of the robot and task dynamics. Experiments across users with varying skill levels demonstrated the effectiveness of the proposed approach, showing statistically significant improvements in task completion time and user satisfaction compared to traditional teleoperation.
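The low-level assimilation idea can be illustrated by a confidence-weighted blend of operator and predicted commands; the blending rule and cutoff below are assumptions for illustration, not the paper's controller.

```python
# Toy confidence-based blending of operator and predicted-plan commands.
import numpy as np

def blended_command(u_human, u_predicted, confidence, c_min=0.3):
    """Below c_min the robot follows the operator unmodified; above it,
    the recognizer's confidence sets the assimilation weight."""
    w = 0.0 if confidence < c_min else confidence
    return (1 - w) * np.asarray(u_human) + w * np.asarray(u_predicted)

print(blended_command([0.10, 0.00, 0.02], [0.12, -0.01, 0.03], confidence=0.8))
```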
Proton Computed Tomography (pCT) provides a promising solution to enhance the accuracy of Relative Stopping Power (RSP) required for proton therapy planning. This research introduces a novel high-granularity pCT architecture that incorporates a silicon pixel tracking system and a calorimetric range telescope, which uniquely integrates range telescope functionality with track discrimination capabilities. A Bortfeld-function fitting algorithm and a Convolutional Neural Network (CNN) classifier are developed and applied for track discrimination. In simulation studies, both approaches demonstrate the capability to reduce uncertainty in Water Equivalent Path Length (WEPL) determination for individual proton tracks to below 3~mm. The standard imaging protocol (3.2~mGy, $4\times10^{8}$ protons) achieves sub-millimeter spatial resolution ($\sim$0.5 mm) with sub-1\% RSP accuracy. With proton count requirements reduced by track discrimination, an ultra-low-dose protocol (0.16~mGy, $2\times10^{7}$~protons) is proposed, achieving sub-1\% RSP accuracy and $\sim$1.1~mm spatial resolution in simulation. This low-dose performance significantly expands clinical applicability, particularly for pediatric imaging or frequent imaging scenarios. Furthermore, the target 10 MHz proton detection rate suggests potential for real-time image guidance during radiotherapy. By circumventing the need for ultra-precise energy measurements, this design minimizes hardware complexity and provides a scalable foundation for future pCT systems.
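Schematically, the range-telescope fit locates the Bragg peak along the detector depth. The sketch below uses a Gaussian-plus-plateau stand-in for the full Bortfeld parametrization, fitted with scipy; all numbers are synthetic.

```python
# Schematic Bragg-peak localisation in a range telescope (stand-in model).
import numpy as np
from scipy.optimize import curve_fit

def bragg_like(depth, amp, peak_depth, width, plateau):
    """Simplified depth-signal curve; the paper uses the Bortfeld function."""
    return plateau + amp * np.exp(-0.5 * ((depth - peak_depth) / width) ** 2)

depth = np.linspace(0, 300, 120)                      # mm along the telescope
rng = np.random.default_rng(7)
signal = bragg_like(depth, 5.0, 210.0, 8.0, 1.0) + rng.normal(0, 0.1, depth.size)

popt, _ = curve_fit(bragg_like, depth, signal, p0=[4.0, 200.0, 10.0, 1.0])
print(f"fitted peak depth ~ {popt[1]:.1f} mm")        # proxy for residual range
```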
In this paper, a novel quantum-classical hybrid framework is proposed that synergizes quantum computing with classical Reinforcement Learning. By leveraging the inherent parallelism of quantum computing, the proposed approach generates robust Q-tables and specialized turn-cost estimations, which are then integrated with a classical Reinforcement Learning pipeline. This classical-quantum fusion yields rapid training convergence, significantly reducing training time, and improved adaptability in scenarios featuring static, dynamic, and moving obstacles. Simulator-based evaluations demonstrate significant enhancements in path efficiency, trajectory smoothness, and mission success rates, underscoring the potential of the framework for real-time, autonomous navigation in complex and unpredictable environments. Furthermore, the proposed framework was tested beyond simulations on practical scenarios, including real-world map data such as the IIT Delhi campus, reinforcing its suitability for such deployments.
Making sense of the world and acting in it relies on building simplified mental representations that abstract away aspects of reality. This principle of cognitive mapping is universal to agents with limited resources. Living organisms, people, and algorithms all face the problem of forming functional representations of their world under various computing constraints. In this work, we explore the hypothesis that human resource-efficient planning may arise from representing the world as predictably structured. Building on the metaphor of concepts as programs, we propose that cognitive maps can take the form of generative programs that exploit predictability and redundancy, in contrast to directly encoding spatial layouts. We use a behavioral experiment to show that people who navigate in structured spaces rely on modular planning strategies that align with programmatic map representations. We describe a computational model that predicts human behavior in a variety of structured scenarios. This model infers a small distribution over possible programmatic cognitive maps conditioned on human prior knowledge of the world, and uses this distribution to generate resource-efficient plans. Our model leverages a Large Language Model as an embedding of human priors, implicitly learned through training on a vast corpus of human data. Our model demonstrates improved computational efficiency, requires drastically less memory, and outperforms unstructured planning algorithms with cognitive constraints at predicting human behavior, suggesting that human planning strategies rely on programmatic cognitive maps.
Urban Building Energy Models (UBEM) are vital for enhancing energy efficiency and sustainability in urban planning. However, data scarcity often challenges their validation, particularly the lack of hourly measured data and the variety of building samples. This study addresses this issue by applying bias adjustment techniques from survey research to improve UBEM validation robustness with incomplete measured data. Error estimation tests are conducted using various levels of missingness, and three bias adjustment methods are employed: multivariate imputation, cell weighting and raking weighting. Key findings indicate that using incomplete data in UBEM validation without adjustment is not advisable, while bias adjustment techniques significantly enhance the robustness of validation, providing more reliable model validity estimates. Cell weighting is preferable in this study due to its reliance on joint distributions of auxiliary variables.
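Of the three adjustments, raking is the easiest to sketch: iterative proportional fitting re-weights the sample until its marginal shares match known population margins. The data, column names, and margins below are toy assumptions.

```python
# Sketch: raking (iterative proportional fitting) to known marginal shares.
import numpy as np
import pandas as pd

def rake(df, margins, weight_col="w", iters=50, tol=1e-8):
    """margins: {column: {category: population share}}."""
    df = df.copy()
    df[weight_col] = 1.0
    for _ in range(iters):
        max_shift = 0.0
        for col, target in margins.items():
            share = df.groupby(col)[weight_col].sum() / df[weight_col].sum()
            factor = df[col].map({k: target[k] / share[k] for k in target})
            df[weight_col] *= factor
            max_shift = max(max_shift, float(abs(factor - 1).max()))
        if max_shift < tol:      # all margins matched
            break
    return df

sample = pd.DataFrame({"vintage": ["pre1980"] * 6 + ["post1980"] * 4,
                       "use": ["office", "retail"] * 5})
weights = rake(sample, {"vintage": {"pre1980": 0.4, "post1980": 0.6},
                        "use": {"office": 0.7, "retail": 0.3}})
print(weights.groupby("vintage")["w"].sum() / weights["w"].sum())
```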
In recent years, the field of robotics has witnessed a significant shift from operating in structured environments to handling dynamic and unpredictable settings. To tackle these challenges, methodologies from the field of self-adaptive systems, which enable systems to react to unforeseen circumstances during runtime, have been applied. The Monitoring-Analysis-Planning-Execution over Knowledge (MAPE-K) feedback loop model is a popular approach, often implemented in a managing subsystem responsible for monitoring and adapting a managed subsystem. This work explores the implementation of the MAPE-K feedback loop based on Behavior Trees (BTs) within the Robot Operating System 2 (ROS2) framework. By delineating the managed and managing subsystems, our approach enhances the flexibility and adaptability of ROS-based systems, ensuring they meet not only Quality-of-Service (QoS) requirements but also system health metric requirements, namely the availability of ROS nodes and communication channels. Our implementation allows the method to be applied to new managed subsystems without needing custom BT nodes, as the desired behavior can be configured within a specific rule set. We demonstrate the effectiveness of our method through various experiments on a system showcasing an aerial perception use case. By evaluating different failure cases, we show both increased perception quality and higher system availability. Our code is open source.
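Conceptually, the MAPE-K phases map onto a behavior tree sequence ticked over a shared knowledge base. The minimal Python sketch below (not the authors' ROS 2 implementation; names and the restart action are assumed) restarts a node when a health probe fails.

```python
# Toy MAPE-K loop as a behavior tree sequence over a shared knowledge base.
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Sequence:
    def __init__(self, children): self.children = children
    def tick(self, kb):
        for c in self.children:
            if c.tick(kb) == FAILURE:
                return FAILURE
        return SUCCESS

class Monitor:
    def tick(self, kb):
        kb["node_alive"] = kb["probe"]()      # observe the managed subsystem
        return SUCCESS

class AnalyzeAndPlan:
    def tick(self, kb):
        kb["plan"] = [] if kb["node_alive"] else ["restart_node"]
        return SUCCESS

class Execute:
    def tick(self, kb):
        for action in kb["plan"]:
            kb["act"](action)                 # adapt the managed subsystem
        return SUCCESS

kb = {"probe": lambda: False, "act": print}   # knowledge shared across phases
Sequence([Monitor(), AnalyzeAndPlan(), Execute()]).tick(kb)  # -> restart_node
```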
In response to the capabilities presented by the High-Intensity Heavy Ion Accelerator Facility (HIAF) and the Accelerator-Driven Subcritical System (CiADS), as well as the proposed Chinese Advanced Nuclear Physics Research Facility (CNUF), we are assembling a consortium of experts in relevant disciplines--both domestically and internationally--to delineate high-precision physics experiments that leverage the state-of-the-art research environment afforded by CNUF. Our focus encompasses six primary domains of inquiry: hadron physics--including endeavors such as the super eta factory and investigations into light hadron structures; muon physics; neutrino physics; neutron physics; the testing of fundamental symmetries; and the exploration of quantum effects within nuclear physics, along with the utilization of vortex accelerators. We aim to foster a well-rounded portfolio of large, medium, and small-scale projects, thus unlocking new scientific avenues and optimizing the potential of the Huizhou large scientific facility. The aspiration for international leadership in scientific research will be a guiding principle in our strategic planning. This initiative will serve as a foundational reference for the Institute of Modern Physics in its strategic planning and goal-setting, ensuring alignment with its developmental objectives while striving to secure a competitive edge in technological advancement. Our ambition is to engage in substantive research within these realms of high-precision physics, to pursue groundbreaking discoveries, and to stimulate progress in China's nuclear physics landscape, positioning Huizhou as a preeminent global hub for advanced nuclear physics research.
Reflexion is an AI-powered platform designed to enable structured emotional self-reflection at scale. By integrating real-time emotion detection, layered reflective prompting, and metaphorical storytelling generation, Reflexion empowers users to engage in autonomous emotional exploration beyond basic sentiment categorization. Grounded in theories of expressive writing, cognitive restructuring, self-determination, and critical consciousness development, the system scaffolds a progressive journey from surface-level emotional recognition toward value-aligned action planning. Initial pilot studies with diverse participants demonstrate positive outcomes in emotional articulation, cognitive reframing, and perceived psychological resilience. Reflexion represents a promising direction for scalable, theory-informed affective computing interventions aimed at fostering emotional literacy and psychological growth across educational, therapeutic, and public health contexts.
This thesis presents a unified control framework for agile and fault-tolerant flight of the Multi-Modal Mobility Morphobot (M4) in aerial mode. The M4 robot is capable of transitioning between ground and aerial locomotion. The articulated legs enable more dynamic maneuvers than a standard quadrotor platform. A nonlinear model predictive control (NMPC) approach is developed to simultaneously plan posture manipulation and thrust vectoring actions, allowing the robot to execute sharp turns and dynamic flight trajectories. The framework integrates an agile and fault-tolerant control logic that enables precise tracking under aggressive maneuvers while compensating for actuator failures, ensuring continued operation without significant performance degradation. Simulation results validate the effectiveness of the proposed method, demonstrating accurate trajectory tracking and robust recovery from faults, contributing to resilient autonomous flight in complex environments.
Recent advances in planning have explored using learning methods to aid planning. However, little attention has been given to adapting search algorithms to work better with learning systems. In this paper, we introduce partial-space search, a new search space for classical planning that leverages the relational structure of actions given by PDDL action schemas -- a structure overlooked by traditional planning approaches. Partial-space search provides a more granular view of the search space and allows earlier pruning of poor actions compared to state-space search. To guide partial-space search, we introduce action set heuristics that evaluate sets of actions in a state. We describe how to automatically convert existing heuristics into action set heuristics. We also train action set heuristics from scratch using large training datasets from partial-space search. Our new planner, LazyLifted, exploits this tighter integration of search and learned heuristics and outperforms the state-of-the-art ML-based heuristic on the IPC 2023 learning track (LT) benchmarks. We also show the efficiency of LazyLifted on high-branching-factor tasks and show that it surpasses LAMA in the combined IPC 2023 LT and high-branching-factor benchmarks.
We present a platform for the generation of educational activities oriented to teaching English as a foreign language. The different activities -- games and language practice exercises -- are strongly based on Natural Language Processing techniques. The platform offers the possibility of playing out-of-the-box games, generated from resources created semi-automatically and then manually curated. It can also generate games or exercises of greater complexity from texts entered by teachers, providing a stage for reviewing and editing the generated content before use. As a way of expanding the variety of activities in the platform, we are currently experimenting with image and text generation. In order to integrate them and improve the performance of other neural tools already integrated, we are working on migrating the platform to a more powerful server. In this paper we describe the development of our platform and its deployment for end users, discussing the challenges faced and how we overcame them, and also detail our future work plans.
Background: Ultra-high-dose-rate (UHDR) radiation therapy has demonstrated promising potential in reducing toxicity to organs-at-risk (OARs). Proton therapy is uniquely positioned to deliver UHDR by leveraging the Bragg peak in conjunction with patient-specific range modulators (PSRMs) to generate a spread-out Bragg peak (SOBP). Existing proton FLASH (pFLASH) planning typically involves (1) generating a multi-energy IMPT plan for spot weights and (2) converting it to single-energy delivery via PSRM optimization. However, the intrinsic coupling between spot weight distribution and PSRM design has not been fully investigated. Purpose: This work proposes Joint Range-Modulator and Spot Optimization (JRSO) that simultaneously optimizes the PSRM and spot weights to improve the plan quality of conformal pFLASH therapy. Methods: Unlike the conventional method, JRSO does not require a one-to-one correspondence between beam spots and PSRM pins. To achieve better plan quality, starting from an initial solution derived from a conventional IMPT plan, JRSO alternately updates the PSRM design and spot weights. This process progressively refines both parameters while ensuring compliance with practical delivery constraints, such as the minimum monitor-unit (MMU) requirement. Results: JRSO obtained improved plan quality compared to the conventional method. For example, in a head-and-neck (HN) case, JRSO lowered the maximum target dose from 117.6% to 107.1%, improved the conformity index from 0.74 to 0.87, and decreased the region-of-interest (ROI) effective dose from 6.50 Gy to 6.10 Gy. Conclusion: A new optimization method, JRSO, is proposed for conformal pFLASH radiotherapy. It outperforms the conventional approach and may extend the applicability of PSRMs to more complex clinical scenarios, particularly those involving misalignments between beam spots and pins.
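The alternating scheme can be caricatured with a least-squares surrogate: fix the modulator parameters and solve for spot weights, then the reverse, until the objective stabilizes. All physics is replaced by toy linear algebra, and the nonnegativity clip is a crude stand-in for the MMU constraint; this is not the paper's optimizer.

```python
# Toy alternating optimization in the spirit of JRSO: x ~ spot weights,
# y ~ modulator parameters, updated in turn on a least-squares "dose" error.
import numpy as np

rng = np.random.default_rng(3)
A, B = rng.normal(size=(50, 8)), rng.normal(size=(50, 5))
d = rng.normal(size=50)                      # target "dose" vector

x, y = np.zeros(8), np.zeros(5)
prev = np.inf
for _ in range(100):
    x, *_ = np.linalg.lstsq(A, d - B @ y, rcond=None)   # spot-weight step
    x = np.maximum(x, 0.0)                   # crude stand-in for the MMU bound
    y, *_ = np.linalg.lstsq(B, d - A @ x, rcond=None)   # modulator step
    obj = float(np.linalg.norm(A @ x + B @ y - d) ** 2)
    if prev - obj < 1e-9:                    # objective has stabilized
        break
    prev = obj
print(obj)
```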