Minimizing intermediate results is critical for efficient multi-join query processing. Although the seminal Yannakakis algorithm offers strong guarantees for acyclic queries, cyclic queries remain an open challenge. In this paper, we propose SplitJoin, a framework that introduces split as a first-class query operator. By partitioning input tables into heavy and light parts, SplitJoin allows different data partitions to use distinct query plans, with the goal of reducing intermediate sizes using existing binary join engines. We systematically explore the design space for split-based optimizations, including threshold selection, split strategies, and join ordering after splits. Implemented as a front-end to DuckDB and Umbra, SplitJoin achieves substantial improvements: on DuckDB, SplitJoin completes 43 social network queries (vs. 29 natively), achieving 2.1x faster runtime and 7.9x smaller intermediates on average (up to 13.6x and 74x, respectively); on Umbra, it completes 45 queries (vs. 35), achieving 1.3x speedups and 1.2x smaller intermediates on average (up to 6.1x and 2.1x, respectively).
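A rough illustration of the heavy/light split idea, not the SplitJoin implementation: the snippet below uses DuckDB's Python API on toy tables (schema, data, and the threshold value are all hypothetical) to partition one input on join-key frequency and union the per-partition join results.

```python
import duckdb

con = duckdb.connect()
# Toy graph-style tables; schema and data are invented for illustration.
con.execute("CREATE TABLE knows(a INT, b INT)")
con.execute("CREATE TABLE person(id INT, city INT)")
con.execute("INSERT INTO knows SELECT range % 100, CAST(floor(sqrt(range)) AS INT) FROM range(5000)")
con.execute("INSERT INTO person SELECT range, range % 10 FROM range(100)")

threshold = 64  # split threshold; choosing it well is part of the design space

# Split: partition 'knows' by the frequency of its join key 'b'.
con.execute(f"CREATE TEMP TABLE heavy_keys AS SELECT b FROM knows GROUP BY b HAVING COUNT(*) > {threshold}")
con.execute("CREATE TEMP TABLE knows_heavy AS SELECT * FROM knows WHERE b IN (SELECT b FROM heavy_keys)")
con.execute("CREATE TEMP TABLE knows_light AS SELECT * FROM knows WHERE b NOT IN (SELECT b FROM heavy_keys)")

# Each partition may be joined with its own plan; the partial results are unioned.
rows = con.execute("""
    SELECT * FROM knows_heavy JOIN person ON knows_heavy.b = person.id
    UNION ALL
    SELECT * FROM knows_light JOIN person ON knows_light.b = person.id
""").fetchall()
print(len(rows))
```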
Multi-Agent Path Finding (MAPF) has gained significant attention, with most research focusing on minimizing collisions and travel time. This paper also considers energy consumption in the path planning of automated guided vehicles (AGVs). It addresses two main challenges: i) resolving collisions between AGVs and ii) assigning tasks to AGVs. We propose a new collision avoidance strategy that takes both energy use and travel time into account. For task assignment, we present two multi-objective algorithms: Non-Dominated Sorting Genetic Algorithm (NSGA) and Adaptive Large Neighborhood Search (ALNS). Comparative evaluations show that these proposed methods perform better than existing approaches in both collision avoidance and task assignment.
We present a dataset generated to investigate urban heat and thermal perception across five neighborhoods in the Barcelona metropolitan area. In collaboration with 14 non-academic partner organizations, we conducted a series of citizen science campaigns involving 439 residents as co-researchers engaged throughout all stages of the research process. Participants, residents of areas classified as highly or very highly climate-vulnerable, identified 210 public outdoor sites relevant to their daily lives. These locations were subsequently characterized using a range of spatial and environmental indicators pertinent to urban heat island effects, urban health, and climate resilience. Over the course of 48 thermal walks, participants carried portable, low-cost sensors that continuously recorded air temperature, relative humidity, and geolocation, resulting in 296,286 processed microclimatic data points. At pre-defined sites, individuals completed standardized surveys to report their Thermal Sensation Votes and Thermal Comfort Votes, yielding 5,169 self-reported entries. Sociodemographic data were also collected to further contextualize participants' responses. The resulting dataset integrates objective environmental measurements with subjective perceptions of heat, enabling point-by-point analysis of thermal experience within the urban fabric. It offers a novel, multi-dimensional resource to support research on heat, thermal inequality, and the experiential dimensions of climate vulnerability, and is intended to inform evidence-based decision-making in urban planning, public health, and climate adaptation.
Long-horizon contact-rich bimanual manipulation presents a significant challenge, requiring complex coordination involving a mixture of parallel execution and sequential collaboration between arms. In this paper, we introduce a hierarchical framework that frames this challenge as an integrated skill planning & scheduling problem, going beyond purely sequential decision-making to support simultaneous skill invocation. Our approach is built upon a library of single-arm and bimanual primitive skills, each trained using Reinforcement Learning (RL) in GPU-accelerated simulation. We then train a Transformer-based planner on a dataset of skill compositions to act as a high-level scheduler, simultaneously predicting the discrete schedule of skills as well as their continuous parameters. We demonstrate that our method achieves higher success rates on complex, contact-rich tasks than end-to-end RL approaches and produces more efficient, coordinated behaviors than traditional sequential-only planners.
In task and motion planning, high-level task planning is done over an abstraction of the world to enable efficient search in long-horizon robotics problems. However, the feasibility of these task-level plans relies on the downward refinability of the abstraction into continuous motion. When a domain's refinability is poor, task-level plans that appear valid may ultimately fail during motion planning, requiring replanning and resulting in slower overall performance. Prior works mitigate this by encoding refinement issues as constraints to prune infeasible task plans. However, these approaches only add constraints upon refinement failure, expending significant search effort on infeasible branches. We propose VIZ-COAST, a method of leveraging the common-sense spatial reasoning of large pretrained Vision-Language Models to identify issues with downward refinement a priori, bypassing the need to fix these failures during planning. Experiments on two challenging TAMP domains show that our approach is able to extract plausible constraints from images and domain descriptions, drastically reducing planning times and, in some cases, eliminating downward refinement failures altogether, generalizing to a diverse range of instances from the broader domain.
Segmentation of liver structures in multi-phase contrast-enhanced computed tomography (CECT) plays a crucial role in computer-aided diagnosis and treatment planning for liver diseases, including tumor detection. In this study, we investigate the performance of UNet-based architectures for liver tumor segmentation, starting from the original UNet and extending to UNet3+ with various backbone networks. We evaluate ResNet, Transformer-based, and State-space (Mamba) backbones, all initialized with pretrained weights. Surprisingly, despite the advances in modern architectures, ResNet-based models consistently outperform Transformer- and Mamba-based alternatives across multiple evaluation metrics. To further improve segmentation quality, we introduce attention mechanisms into the backbone and observe that incorporating the Convolutional Block Attention Module (CBAM) yields the best performance. ResNetUNet3+ with the CBAM module not only produced the best overlap metrics, with a Dice score of 0.755 and IoU of 0.662, but also achieved the most precise boundary delineation, evidenced by the lowest HD95 distance of 77.911. The model's superiority was further cemented by its leading overall accuracy of 0.925 and specificity of 0.926, showcasing its robust capability in accurately identifying both lesion and healthy tissue. To further enhance interpretability, Grad-CAM visualizations were employed to highlight the regions most influential to the model's predictions, providing insights into its decision-making process. These findings demonstrate that classical ResNet architectures, when combined with modern attention modules, remain highly competitive for medical image segmentation tasks, offering a promising direction for liver tumor detection in clinical practice.
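For readers unfamiliar with CBAM, the following is a minimal standalone CBAM block in PyTorch; it follows the generic published design (channel attention followed by spatial attention) and is not necessarily the exact variant integrated into ResNetUNet3+ in this study.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention, then spatial attention."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # Channel attention: shared MLP over average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: conv over channel-wise average and max maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                           self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca
        sa = torch.sigmoid(self.spatial(torch.cat([x.mean(1, keepdim=True),
                                                   x.amax(1, keepdim=True)], dim=1)))
        return x * sa

# Example: attach CBAM to a 64-channel feature map from an encoder stage.
feat = torch.randn(2, 64, 32, 32)
print(CBAM(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```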
Multi-objective search (MOS) has emerged as a unifying framework for planning and decision-making problems where multiple, often conflicting, criteria must be balanced. While the problem has been studied for decades, recent years have seen renewed interest in the topic across AI applications such as robotics, transportation, and operations research, reflecting the reality that real-world systems rarely optimize a single measure. This paper surveys developments in MOS while highlighting cross-disciplinary opportunities, and outlines open challenges that define the emerging frontier of MOS.
We present a vector-based method to balance chemical reactions. The algorithm builds candidates in a deterministic way, removes duplicates, and always prints coefficients in the lowest whole-number form. For redox cases, electrons and protons/hydroxide are treated explicitly, so both mass and charge are balanced. We also outline the basic principles of the vector formulation of stoichiometry, interpreting reactions as integer vectors in composition space; this geometric view supports compact visualizations of reagent-product interactions and helps surface distinct reaction families. The method enumerates valid balances for arbitrary user-specified species lists without special-case balancing rules or symbolic tricks, and it provides a clean foundation for developing new algorithmic variants (e.g., alternative objectives or constraints). On representative examples (neutralization, double displacement, decomposition, classical redox, small multicomponent sets) and a negative control, the method produced correct integer balances. When multiple balances exist, we report a canonical one - minimizing the total coefficient sum with a simple tie-breaker - without claiming global optimality beyond the solutions the search enumerates. The procedure applies per reaction and extends to reaction networks via consistent per-reaction application. We do not report runtimes; broader benchmarking and a code/data release are planned.
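To make the vector view of stoichiometry concrete, here is a minimal null-space sketch in Python; the example reaction, the SymPy-based solver, and the sign convention for products are our own illustration rather than the paper's algorithm.

```python
from sympy import Matrix, lcm

# Composition matrix for  a*CH4 + b*O2 -> c*CO2 + d*H2O  (rows: C, H, O).
# Products enter with a negative sign so that A @ coeffs = 0 expresses mass balance.
A = Matrix([
    [1, 0, -1,  0],   # carbon
    [4, 0,  0, -2],   # hydrogen
    [0, 2, -2, -1],   # oxygen
])

null = A.nullspace()[0]              # one-dimensional null space for this reaction
scale = lcm([entry.q for entry in null])   # least common multiple of denominators
coeffs = [int(entry * scale) for entry in null]
print(coeffs)  # [1, 2, 1, 2]  ->  CH4 + 2 O2 -> CO2 + 2 H2O
```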
Agentic AI represents a transformative shift in artificial intelligence, but its rapid advancement has led to a fragmented understanding, often conflating modern neural systems with outdated symbolic models -- a practice known as conceptual retrofitting. This survey cuts through this confusion by introducing a novel dual-paradigm framework that categorizes agentic systems into two distinct lineages: the Symbolic/Classical (relying on algorithmic planning and persistent state) and the Neural/Generative (leveraging stochastic generation and prompt-driven orchestration). Through a systematic PRISMA-based review of 90 studies (2018--2025), we provide a comprehensive analysis structured around this framework across three dimensions: (1) the theoretical foundations and architectural principles defining each paradigm; (2) domain-specific implementations in healthcare, finance, and robotics, demonstrating how application constraints dictate paradigm selection; and (3) paradigm-specific ethical and governance challenges, revealing divergent risks and mitigation strategies. Our analysis reveals that the choice of paradigm is strategic: symbolic systems dominate safety-critical domains (e.g., healthcare), while neural systems prevail in adaptive, data-rich environments (e.g., finance). Furthermore, we identify critical research gaps, including a significant deficit in governance models for symbolic systems and a pressing need for hybrid neuro-symbolic architectures. The findings culminate in a strategic roadmap arguing that the future of Agentic AI lies not in the dominance of one paradigm, but in their intentional integration to create systems that are both adaptable and reliable. This work provides the essential conceptual toolkit to guide future research, development, and policy toward robust and trustworthy hybrid intelligent systems.
AI agents have rapidly gained popularity across research and industry as systems that extend large language models with additional capabilities to plan, use tools, remember, and act toward specific goals. Yet despite their promise, developers face persistent and often underexplored challenges when building, deploying, and maintaining these emerging systems. To identify these challenges, we study developer discussions on Stack Overflow, the world's largest developer-focused Q and A platform with about 60 million questions and answers and 30 million users. We construct a taxonomy of developer challenges through tag expansion and filtering, apply LDA-MALLET for topic modeling, and manually validate and label the resulting themes. Our analysis reveals seven major areas of recurring issues encompassing 77 distinct technical challenges related to runtime integration, dependency management, orchestration complexity, and evaluation reliability. We further quantify topic popularity and difficulty to identify which issues are most common and hardest to resolve, map the tools and programming languages used in agent development, and track their evolution from 2021 to 2025 in relation to major AI model and framework releases. Finally, we present the implications of our results, offering concrete guidance for practitioners, researchers, and educators on agent reliability and developer support.
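As a minimal sketch of the topic-modeling step, the snippet below runs LDA with scikit-learn on a few invented agent-related posts; the paper's pipeline uses LDA-MALLET on Stack Overflow data, so treat this only as an illustration of the technique.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented stand-ins for agent-related developer posts.
posts = [
    "agent tool calling error when parsing json output",
    "how to persist memory between runs of an llm agent",
    "vector store retrieval returns empty results in agent chain",
    "timeout when orchestrating multiple tools in one agent step",
    "evaluate agent responses reliably across prompt versions",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```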
Formation control simplifies minimizing multi-robot cost functions by encoding a cost function as a shape the robots maintain. However, by reducing complex cost functions to formations, discrepancies arise between maintaining the shape and minimizing the original cost function. For example, a Diamond or Box formation shape is often used for protecting all members of the formation. When more information about the surrounding environment becomes available, a static shape often no longer minimizes the original protection cost. We propose a formation planner to reduce the mismatch between a formation and the cost function while still leveraging efficient formation controllers. Our formation planner is a two-step optimization problem that identifies desired relative robot positions. We first solve a constrained problem to estimate non-linear and non-differentiable costs with a weighted sum of surrogate cost functions. We theoretically analyze this problem and identify situations where weights do not need to be updated. The weighted, surrogate cost function is then minimized using relative positions between robots. The desired relative positions are realized using a non-cooperative formation controller derived from Lyapunov's direct method. We then demonstrate the efficacy of this approach for military-like costs such as protection and obstacle avoidance. In simulations, we show that a formation planner can reduce a single cost by over 75%. When minimizing a variety of cost functions simultaneously, using a formation planner with adaptive weights can reduce the cost by 20-40%. Formation planning provides better performance by minimizing a surrogate cost function that closely approximates the original cost function instead of relying on a shape abstraction.
This survey provides an analysis of current methodologies integrating legal and logical specifications into the perception, prediction, and planning modules of automated driving systems. We systematically explore techniques ranging from logic-based frameworks to computational legal reasoning approaches, emphasizing their capability to ensure regulatory compliance and interpretability in dynamic and uncertain driving environments. A central finding is that significant challenges arise at the intersection of perceptual reliability, legal compliance, and decision-making justifiability. To systematically analyze these challenges, we introduce a taxonomy categorizing existing approaches by their theoretical foundations, architectural implementations, and validation strategies. We particularly focus on methods that address perceptual uncertainty and incorporate explicit legal norms, facilitating decisions that are both technically robust and legally defensible. The review covers neural-symbolic integration methods for perception, logic-driven rule representation, and norm-aware prediction strategies, all contributing toward transparent and accountable autonomous vehicle operation. We highlight critical open questions and practical trade-offs that must be addressed, offering multidisciplinary insights from engineering, logic, and law to guide future developments in legally compliant autonomous driving systems.
Trajectory planning in dense, interactive traffic scenarios presents significant challenges for autonomous vehicles, primarily due to the uncertainty of human driver behavior and the non-convex nature of collision avoidance constraints. This paper introduces a stochastic optimal control framework to address these issues simultaneously, without excessively conservative approximations. We opt to model human driver decisions as a Markov Decision Process and propose a method for handling collision avoidance between non-convex vehicle shapes by imposing a positive distance constraint between compact sets. In this framework, we investigate three alternative chance constraint formulations. To ensure computational tractability, we introduce tight, continuously differentiable reformulations of both the non-convex distance constraints and the chance constraints. The efficacy of our approach is demonstrated through simulation studies of two challenging interactive scenarios: an unregulated intersection crossing and a highway lane change in dense traffic.
The Compton Spectrometer and Imager (COSI) is a Compton telescope designed to survey the 0.2 - 5 MeV sky, consisting of a compact array of cross-strip germanium detectors. It is planned to be launched in 2027 into an equatorial low-Earth (530 km) orbit with a prime mission duration of 2 years. The observation of MeV gamma rays is dominated by background, mostly from extragalactic and atmospheric photons but also from the activation of the detector materials induced by cosmic-ray interactions. Thus, background simulation and identification are crucial for the data analysis. In this work we perform Monte Carlo simulations of the background for the first 3 months in orbit, and we extrapolate the results to 2 years in orbit, in order to determine the build-up of the activation due to long-lived isotopes. We determine the rates of events induced by the background that are reconstructed as Compton events in the simulated COSI data. We find that the extragalactic background photons dominate at low energies (<660 keV), while delayed activation from cosmic-ray primaries (proton/alpha) and albedo photons dominate at higher energies. As part of this work, a comparison at low latitude (<1 deg) between recent measurements of the SAA by the High-Energy Particle Detector (HEPD-01) on board the China Seismo-Electromagnetic Satellite (CSES-01) and the AP9/AE9 model has been made, showing that the model overestimates the flux by a factor of 9. The systematic uncertainties associated with these components are quantified. This work marks a major step forward in estimating and understanding the expected background rates for the COSI satellite mission.
Handling loosely placed objects with robotic manipulators is a difficult task from the point of view of trajectory planning and control. This becomes even more challenging when the object to be handled is a container filled with liquid. This paper addresses the task of transporting a liquid-filled cup placed on a tray along a prescribed path in the shortest time. The objective is to minimize sloshing, thus avoiding spillage of the fluid. To this end, the sloshing dynamics is incorporated into the dynamic model used within the optimal control problem formulation. The optimization problem is solved using a direct multiple shooting approach.
Psychiatric comorbidity is clinically significant yet challenging due to the complexity of multiple co-occurring disorders. To address this, we develop a novel approach integrating synthetic patient electronic medical record (EMR) construction and multi-agent diagnostic dialogue generation. We create 502 synthetic EMRs for common comorbid conditions using a pipeline that ensures clinical relevance and diversity. Our multi-agent framework translates the clinical interview protocol into a hierarchical state machine and context tree, supporting over 130 diagnostic states while maintaining clinical standards. Through this rigorous process, we construct PsyCoTalk, the first large-scale dialogue dataset supporting comorbidity, containing 3,000 multi-turn diagnostic dialogues validated by psychiatrists. This dataset enhances diagnostic accuracy and treatment planning, offering a valuable resource for psychiatric comorbidity research. Compared to real-world clinical transcripts, PsyCoTalk exhibits high structural and linguistic fidelity in terms of dialogue length, token distribution, and diagnostic reasoning strategies. Licensed psychiatrists confirm the realism and diagnostic validity of the dialogues. This dataset enables the development and evaluation of models capable of multi-disorder psychiatric screening in a single conversational pass.
It is important to monitor road issues such as bumps and potholes to enhance safety and improve road conditions. Smartphones are equipped with various built-in sensors that offer a cost-effective and straightforward way to assess road quality. However, progress in this area has been slow due to the lack of high-quality, standardized datasets. This paper discusses a new dataset created by a mobile app that collects sensor data from devices like GPS, accelerometers, gyroscopes, magnetometers, gravity sensors, and orientation sensors. This dataset is one of the few that integrates Geographic Information System (GIS) data with weather information and video footage of road conditions, providing a comprehensive understanding of road issues with geographic context. The dataset allows for a clearer analysis of road conditions by compiling essential data, including vehicle speed, acceleration, rotation rates, and magnetic field intensity, along with the visual and spatial context provided by GIS, weather, and video data. Its goal is to support initiatives that enhance traffic management, infrastructure development, road safety, and urban planning. Additionally, the dataset will be publicly accessible to promote further research and innovation in smart transportation systems.
Electricity demand and generation have become increasingly unpredictable with the growing share of variable renewable energy sources in the power system. Forecasting electricity supply by fuel mix is crucial for market operation, ensuring grid stability, optimizing costs, integrating renewable energy sources, and supporting sustainable energy planning. We introduce two statistical methods, centering on forecast reconciliation and compositional data analysis, to forecast short-term electricity supply by different types of fuel mix. Using data for five electricity markets in Australia, we study the forecast accuracy of these techniques. The bottom-up hierarchical forecasting method consistently outperforms the other approaches. Moreover, fuel mix forecasting is most accurate in power systems with a higher share of stable fossil fuel generation.
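A toy illustration of the bottom-up idea: forecast each fuel-type series separately and sum the forecasts so the total is coherent by construction. The synthetic series, the exponential-smoothing model, and the horizon below are placeholders, not the paper's setup.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic hourly supply per fuel type (not real market data).
idx = pd.date_range("2024-01-01", periods=200, freq="h")
rng = np.random.default_rng(1)
data = pd.DataFrame({
    "coal": 2000 + 50 * rng.standard_normal(200),
    "gas":   800 + 40 * rng.standard_normal(200),
    "wind":  600 + 200 * np.sin(np.arange(200) / 12) + 30 * rng.standard_normal(200),
}, index=idx)

# Bottom-up reconciliation: forecast each fuel series, then sum them to obtain
# the total-supply forecast, so the hierarchy is coherent by construction.
h = 24
bottom_forecasts = {
    name: ExponentialSmoothing(series, trend="add").fit().forecast(h)
    for name, series in data.items()
}
bu = pd.DataFrame(bottom_forecasts)
bu["total"] = bu.sum(axis=1)
print(bu.head())
```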
Understanding how urban systems and traffic dynamics co-evolve is crucial for advancing sustainable and resilient cities. However, their bidirectional causal relationships remain underexplored due to challenges of simultaneously inferring spatial heterogeneity, temporal variation, and feedback mechanisms. To address this gap, we propose a novel spatio-temporal causality framework that bridges correlation and causation by integrating spatio-temporal weighted regression with a newly developed spatio-temporal convergent cross-mapping approach. Characterizing cities through urban structure, form, and function, the framework uncovers bidirectional causal patterns between urban systems and traffic dynamics across 30 cities on six continents. Our findings reveal asymmetric bidirectional causality, with urban systems exerting stronger influences on traffic dynamics than the reverse in most cities. Urban form and function shape mobility more profoundly than structure, even though structure often exhibits higher correlations, as observed in cities such as Singapore, New Delhi, London, Chicago, and Moscow. This does not preclude the reversed causal direction, whereby long-established mobility patterns can also reshape the built environment over time. Finally, we identify three distinct causal archetypes: tightly coupled, pattern-heterogeneous, and workday-attenuated, which map pathways from causal diagnosis to intervention. This typology supports city-to-city learning and lays a foundation for context-sensitive strategies in sustainable urban and transport planning.
Engaging the private sector in contraceptive method supply is critical for creating equitable, sustainable, and accessible healthcare systems. To achieve this, it is essential to understand where women obtain their modern contraceptives. While national-level estimates provide valuable insights into overall trends in contraceptive supply, they often obscure variation within and across subnational regions. Addressing localized needs has become increasingly important as countries adopt decentralized models for family planning services. Decentralization has also underscored the need for reliable subnational estimates of key family planning indicators. The absence of regularly collected subnational data has hindered effective monitoring and decision-making. To bridge this gap, we propose a novel approach that leverages latent attributes in Demographic and Health Survey (DHS) data to produce Bayesian probabilistic projections of contraceptive method supply shares (the proportions of modern contraceptive methods supplied by public and private sectors) with limited data. Our modeling framework is built on Bayesian hierarchical models. Using penalized splines to track public and private supply shares over time, we leverage the spatial nature of the data and incorporate a correlation structure between recent supply share observations at national and subnational levels. This framework contributes to the domain of subnational estimation of proportions in data-sparse settings, outperforming comparable and previous approaches. As decentralization continues to reshape family planning services, producing reliable subnational estimates of key indicators is increasingly vital for researchers and policymakers.
Vision-language-action (VLA) models have significantly advanced robotic manipulation by integrating vision-language models (VLMs) and action decoders into a unified architecture. However, their deployment on resource-constrained edge devices, such as mobile robots or embedded systems (e.g., Jetson Orin Nano), remains challenging due to high computational demands, especially in real-world scenarios where power, latency, and computational resources are critical. To close this gap, we introduce Nano-scale Vision-Language Action (NanoVLA), a family of lightweight VLA architectures that achieve high performance with minimal resources. Our core innovations include: (1) vision-language decoupling that moves the conventional early fusion of vision and language inputs in VLMs to a late stage, achieving better performance while enabling caching and reducing inference overhead and latency; (2) long-short action chunking to ensure smooth, coherent multi-step planning without sacrificing real-time responsiveness; (3) dynamic routing that adaptively assigns lightweight or heavy backbones based on task complexity, further optimizing inference efficiency. Experimental results on several benchmarks, as well as real-world deployments, demonstrate that NanoVLA achieves up to 52x faster inference on edge devices compared to previous state-of-the-art VLA models, with 98% fewer parameters while maintaining or surpassing their task accuracy and generalization. Ablation studies confirm that our decoupling strategy preserves cross-task transferability, and the routing module enhances cost-performance trade-offs, enabling practical, high-precision robotic manipulation on resource-constrained hardware.
With the rapid growth of artificial intelligence (AI) and cloud services, data centers have become critical infrastructure driving digital economies; their increasing energy demand heightens concerns over electricity use and carbon emissions and underscores the need for carbon-aware infrastructure planning. Most existing studies assume static power systems, focus only on operational emissions, and overlook co-optimization. This paper proposes a dynamic joint planning framework that co-optimizes long-term data center and power system development over 15 years. The model determines siting, capacity, and type of data centers alongside power generation expansion, storage deployment, and retirements, accounting for both operational and embodied emissions. To handle multi-scale uncertainty, a large-scale two-stage stochastic program is formulated and solved via an enhanced Benders decomposition. Applied to the PJM Interconnection, with curated datasets released on GitHub, results show the system can support up to 55 GW of peak data center demand, with Virginia (DOM) and Northern Illinois (ComEd) as optimal hosts. Compared to non-joint planning, the framework cuts investment cost by 12.6%, operational cost by 8.25%, and emissions by 5.63%. Including lifecycle emissions further raises renewable deployment by 25.5%, highlighting embodied carbon's role in deeper decarbonization.
Understanding the dynamics of the spread of diseases within populations is critical for effective public health interventions. We extend the classical SIR model by incorporating additional complexities such as the introduction of a cure and migration between cities. Our framework leverages a system of differential equations to simulate disease transmission across a network of interconnected cities, capturing more realistic patterns. We present theoretical results on the convergence of population sizes in the migration framework (in the absence of deaths). We also run numerical simulations to understand how the timing of the introduction of the cure affects mortality rates. Our numerical results explain how localized interventions affect the spread of the disease across cities. In summary, this work advances the modeling of epidemics to a more local scope, offering a more expressive tool for epidemiological research and public health planning.
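A minimal two-city sketch of the migration-extended SIR dynamics; the symmetric migration term, parameter values, and initial conditions are illustrative assumptions rather than the paper's calibration (the cure and death mechanisms are omitted here).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-city SIR with symmetric migration; all rates below are illustrative.
beta, gamma = 0.3, 0.1   # transmission and recovery rates
m = 0.01                 # migration rate between the two cities

def rhs(t, y):
    S1, I1, R1, S2, I2, R2 = y
    N1, N2 = S1 + I1 + R1, S2 + I2 + R2
    dS1 = -beta * S1 * I1 / N1 + m * (S2 - S1)
    dI1 =  beta * S1 * I1 / N1 - gamma * I1 + m * (I2 - I1)
    dR1 =  gamma * I1 + m * (R2 - R1)
    dS2 = -beta * S2 * I2 / N2 + m * (S1 - S2)
    dI2 =  beta * S2 * I2 / N2 - gamma * I2 + m * (I1 - I2)
    dR2 =  gamma * I2 + m * (R1 - R2)
    return [dS1, dI1, dR1, dS2, dI2, dR2]

y0 = [990, 10, 0, 1000, 0, 0]   # outbreak starts in city 1 only
sol = solve_ivp(rhs, (0, 160), y0)
print(sol.y[:, -1].round(1))    # final compartment sizes in both cities
```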
This paper presents an integrated robotic fused deposition modeling additive manufacturing system featuring closed-loop thermal control and intelligent in-situ defect correction using a 6-degree-of-freedom robotic arm and an Oak-D camera. The robot arm's end effector was modified to mount an E3D hotend thermally regulated by an IoT microcontroller, enabling precise temperature control through real-time feedback. The filament extrusion system was synchronized with robotic motion, coordinated via ROS2, ensuring consistent deposition along complex trajectories. A vision system based on OpenCV detects layer-wise defect positions, commanding autonomous re-extrusion at identified sites. Experimental validation demonstrated successful defect mitigation in printing operations. The integrated system effectively addresses the challenges of real-time quality assurance. Inverse kinematics were used for motion planning, while homography transformations corrected camera perspectives for accurate defect localization. The intelligent system successfully mitigated surface anomalies without interrupting the print process. By combining real-time thermal regulation, motion control, and intelligent defect detection & correction, this architecture establishes a scalable and adaptive robotic additive manufacturing framework suitable for aerospace, biomedical, and industrial applications.
In this paper, we propose a computationally efficient quadratic programming (QP) approach for generating smooth, $C^1$ continuous paths for mobile robots using piece-wise quadratic Bezier (PWB) curves. Our method explicitly incorporates safety margins within a structured optimization framework, balancing trajectory smoothness and robustness with manageable numerical complexity suitable for real-time and embedded applications. Comparative simulations demonstrate clear advantages over traditional piece-wise linear (PWL) path planning methods, showing reduced trajectory deviations, enhanced robustness, and improved overall path quality. These benefits are validated through simulations using a Pure-Pursuit controller in representative scenarios, highlighting the practical effectiveness and scalability of our approach for safe navigation.
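A toy CVXPY sketch of a QP over piecewise quadratic Bezier control points with $C^0$ and $C^1$ continuity constraints; the waypoint-interpolation setup and the simple smoothness objective (and the omission of safety margins) are simplifications, not the paper's formulation.

```python
import numpy as np
import cvxpy as cp

# Waypoints the path should pass through (toy 2D example, not the paper's scenario).
W = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.0], [3.0, 1.0]])
n_seg = len(W) - 1

# Control points of each quadratic Bezier segment: P0, P1, P2 per segment.
P0 = cp.Variable((n_seg, 2))
P1 = cp.Variable((n_seg, 2))
P2 = cp.Variable((n_seg, 2))

constraints = []
for i in range(n_seg):
    # C^0: segment endpoints interpolate the waypoints.
    constraints += [P0[i] == W[i], P2[i] == W[i + 1]]
for i in range(n_seg - 1):
    # C^1: end tangent of segment i equals start tangent of segment i+1.
    constraints += [P2[i] - P1[i] == P1[i + 1] - P0[i + 1]]

# Smoothness surrogate: keep middle control points close to the chord midpoints.
objective = cp.Minimize(sum(cp.sum_squares(P1[i] - (W[i] + W[i + 1]) / 2)
                            for i in range(n_seg)))
prob = cp.Problem(objective, constraints)
prob.solve()
print(P1.value.round(3))
```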
This study applies an optimized XGBoost regression model to estimate district-level expenditures on high-dosage tutoring from incomplete administrative data. The COVID-19 pandemic caused unprecedented learning loss, with K-12 students losing up to half a grade level in certain subjects. To address this, the federal government allocated \$190 billion in relief. We know from previous research that small-group tutoring, summer and after-school programs, and increased support staff were all common expenditures for districts, but we do not know how much was spent in each category. Using a custom scraped dataset of over 7,000 ESSER (Elementary and Secondary School Emergency Relief) plans, we model tutoring allocations as a function of district characteristics such as enrollment, total ESSER funding, urbanicity, and school count. Extending the trained model to districts that mention tutoring but omit cost information yields an estimated aggregate allocation of approximately \$2.2 billion. The model achieved an out-of-sample $R^2$ of 0.358, demonstrating moderate predictive accuracy given substantial reporting heterogeneity. Methodologically, this work illustrates how gradient-boosted decision trees can reconstruct large-scale fiscal patterns where structured data are sparse or missing. The framework generalizes to other domains where policy evaluation depends on recovering latent financial or behavioral variables from semi-structured text and sparse administrative sources.
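A schematic sketch of such an estimation pipeline on synthetic data: fit a gradient-boosted regressor on district features, check out-of-sample fit, then sum predictions for districts missing cost information. Column names, hyperparameters, and the synthetic target are placeholders, not the scraped ESSER dataset.

```python
import numpy as np
import pandas as pd
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 2000
# Synthetic stand-in for district features; column names are hypothetical.
districts = pd.DataFrame({
    "enrollment": rng.lognormal(8, 1, n),
    "esser_total": rng.lognormal(14, 1, n),
    "urbanicity": rng.integers(0, 4, n),    # coded 0-3 (e.g., rural to city)
    "school_count": rng.integers(1, 80, n),
})
# Synthetic target loosely tied to funding, only so the pipeline runs end to end.
tutoring_spend = 0.05 * districts["esser_total"] * (1 + 0.1 * rng.standard_normal(n))

X_train, X_test, y_train, y_test = train_test_split(districts, tutoring_spend, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)
print("out-of-sample R^2:", round(r2_score(y_test, model.predict(X_test)), 3))

# "Extending" the model: predict for districts with missing cost data and aggregate.
print("estimated aggregate allocation:", model.predict(X_test).sum())
```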
We develop a systematic framework for constructing (3+1)-dimensional topological quantum field theories (TQFTs) that realize specified anomalies of finite symmetries, as encountered in gauge theories with fermions or fermionic lattice systems. Our approach generalizes the Wang-Wen-Witten symmetry-extension construction to the fermionic setting, building on two recent advances in the study of fermionic TQFTs and related homotopy theory. The first is the categorical classification of anomalous TQFTs in (3+1)d. The second, which we develop further in a planned sequel to this paper, is a hastened Adams spectral sequence for computing supercohomology groups, closely paralleling techniques from cobordism theory. By integrating supercohomology and cobordism methods within the recently developed categorical framework of fusion 2-categories, we provide a concrete and systematic route to constructing fermionic TQFTs with specified anomalies, thereby establishing a conceptual bridge between anomaly realization, cobordism, and higher-categorical structures.
Autoregressive video diffusion models are capable of long rollouts that are stable and consistent with history, but they are unable to guide the current generation with conditioning from the future. In camera-guided video generation with a predefined camera trajectory, this limitation leads to collisions with the generated scene, after which autoregression quickly collapses. To address this, we propose Generative View Stitching (GVS), which samples the entire sequence in parallel such that the generated scene is faithful to every part of the predefined camera trajectory. Our main contribution is a sampling algorithm that extends prior work on diffusion stitching for robot planning to video generation. While such stitching methods usually require a specially trained model, GVS is compatible with any off-the-shelf video model trained with Diffusion Forcing, a prevalent sequence diffusion framework that we show already provides the affordances necessary for stitching. We then introduce Omni Guidance, a technique that enhances the temporal consistency in stitching by conditioning on both the past and future, and that enables our proposed loop-closing mechanism for delivering long-range coherence. Overall, GVS achieves camera-guided video generation that is stable, collision-free, frame-to-frame consistent, and closes loops for a variety of predefined camera paths, including Oscar Reutersv\"ard's Impossible Staircase. Results are best viewed as videos at https://andrewsonga.github.io/gvs.
We present a framework for uncovering and exploiting dependencies among tools and documents to enhance exemplar artifact generation. Our method begins by constructing a tool knowledge graph from tool schemas, including descriptions, arguments, and output payloads, using a DeepResearch-inspired analysis. In parallel, we derive a complementary knowledge graph from internal documents and SOPs, which is then fused with the tool graph. To generate exemplar plans, we adopt a deep-sparse integration strategy that aligns structural tool dependencies with procedural knowledge. Experiments demonstrate that this unified framework effectively models tool interactions and improves plan generation, underscoring the benefits of linking tool graphs with domain knowledge graphs for tool-augmented reasoning and planning.
Agentic tool use has gained traction with the rise of agentic tool calling, yet most existing work overlooks the complexity of multi-turn tool interactions. We introduce OrchDAG, a synthetic data generation pipeline that models tool execution as directed acyclic graphs (DAGs) with controllable complexity. Using this dataset, we benchmark model performance and propose a graph-based reward to enhance RLVR training. Experiments show that the dataset presents a challenging but solvable benchmark, and the proposed reward is effective when combined with GRPO-style algorithms, highlighting the importance of leveraging topological structure and data complexity in multi-turn tool use.
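A small sketch of what a tool-execution DAG with controllable complexity might look like, using NetworkX; the generator and the complexity measures are our illustration, not the OrchDAG pipeline.

```python
import random
import networkx as nx

def random_tool_dag(n_tools=8, edge_prob=0.3, seed=0):
    """Build a random DAG of tool calls; edges point from prerequisite to dependent tool."""
    rng = random.Random(seed)
    g = nx.DiGraph()
    g.add_nodes_from(range(n_tools))
    for i in range(n_tools):
        for j in range(i + 1, n_tools):   # only forward edges, so the graph stays acyclic
            if rng.random() < edge_prob:
                g.add_edge(i, j)
    return g

dag = random_tool_dag()
# Controllable complexity: depth of the longest dependency chain and edge count.
print("longest chain:", nx.dag_longest_path_length(dag), "edges:", dag.number_of_edges())
# A valid multi-turn execution order is any topological sort of the DAG.
print("execution order:", list(nx.topological_sort(dag)))
```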