WWW 2026

Curiosity-Driven Knowledge Retrieval for Mobile Agents

Sijia Li1, Xiaoyu Tan2, Shahir Ali3, Niels Schmidt3, Gengchen Ma1, Xihe Qiu1
1Shanghai University of Engineering Science, 2National University of Singapore, 3Droidrun

Abstract

Mobile agents have made progress toward reliable smartphone automation, yet performance in complex applications remains limited by incomplete knowledge and weak generalization to unseen environments. We introduce a curiosity-driven knowledge retrieval framework that formalizes uncertainty during execution as a curiosity score. When this score exceeds a threshold, the system retrieves external information from documentation, code repositories, and historical trajectories. Retrieved content is organized into structured AppCards, which encode functional semantics, parameter conventions, interface mappings, and interaction patterns. During execution, an enhanced agent selectively integrates relevant AppCards into its reasoning process, compensating for knowledge blind spots and improving planning reliability. Evaluation on the AndroidWorld benchmark shows consistent improvements across backbones, with an average gain of six percentage points and a new state-of-the-art success rate of 88.8% when combined with GPT-5. Analysis indicates that AppCards are particularly effective for multi-step and cross-application tasks, while the size of the improvement depends on the backbone model. Case studies further confirm that AppCards reduce ambiguity, shorten exploration, and support stable execution trajectories.

Results

AndroidWorld Benchmark Comparison

Table 1: Task success rate (SR, %) on AndroidWorld, comparing public leaderboard entries with our method.


Performance Gain from AppCards

Table 2: Overall success rates (%) on AndroidWorld across all difficulty levels (116 tasks), comparing DroidRun with and without AppCards for different backbone models.


Data Analysis

Table 1 presents the overall comparison on the AndroidWorld benchmark. Against existing state-of-the-art methods, DroidRun with AppCards, using GPT-5 on version v0.3.9, achieves a success rate of 88.8%, surpassing the 84.5% previously reported for MobileUse and establishing a new best publicly available result. In relative terms, GPT-5 improves from 84.5% to 88.8%, an absolute gain of 4.3 percentage points and a relative increase of approximately 5.1%. For Gemini 2.5 Pro, performance improves from 63.0% to 69.0% in version v0.3.3 and from 65.5% to 71.6% in version v0.3.9; both versions yield gains of around 6 percentage points, with relative improvements in the range of 9–10%. In contrast, Grok 4 Fast decreases from 61.2% to 60.3% in version v0.3.9, showing no benefit from AppCards.

Taken together, these results indicate that the effectiveness of AppCards varies across models and versions. When the backbone model has strong reasoning and knowledge-integration capabilities, AppCards translate consistently into performance improvements, with GPT-5 achieving the largest gains. For weaker or stylistically mismatched models, however, AppCards may be underutilized and can even hurt performance. This suggests that AppCards not only enhance performance but also expose critical interactions between external knowledge and model characteristics, offering insights for designing more robust knowledge-injection mechanisms.

Table 2 reports detailed performance across task difficulty levels. In version v0.3.3, Gemini 2.5 Pro with AppCards improves from 78.7% to 83.6% on easy tasks, from 55.6% to 61.1% on medium tasks, and from 26.3% to 36.8% on hard tasks. Although improvements appear at all levels, the largest gain is on hard tasks, an increase of 10.5 percentage points, highlighting the value of AppCards in guiding models through complex task structures. In version v0.3.9, GPT-5 exhibits even more pronounced improvements. Easy tasks remain nearly unchanged, moving from 90.2% to 91.8%, and medium tasks hold steady at 88.9%. Hard tasks, however, rise dramatically from 57.9% to 78.9%, a 21.0 percentage point improvement, underscoring the critical role of AppCards in bridging knowledge gaps under complex conditions. Gemini 2.5 Pro in the same version shows consistent improvements of 4.9, 8.3, and 5.2 percentage points on easy, medium, and hard tasks, respectively, reflecting balanced benefits. By contrast, Grok 4 Fast shows modest gains on easy and medium tasks but declines sharply on hard tasks, dropping from 52.6% to 31.6%, a decrease of 21.0 percentage points, indicating that its knowledge-integration ability is insufficient to leverage AppCards in demanding scenarios.

Overall, these stratified experiments demonstrate that AppCards provide the greatest benefit on complex tasks, substantially enhancing long-horizon reasoning and cross-application operations. At the same time, they reveal clear divergences among models: strong backbones reliably absorb external knowledge and achieve significant gains, whereas weaker models may become unstable. This contrast shows that AppCards not only improve average performance but also offer concrete evidence for how external knowledge interacts with model capability.

Trajectory Demonstration

AppCard Construction Guide

Figures

Overview of the curiosity-driven retrieval pipeline.
Overall framework of the curiosity-driven knowledge retrieval system for mobile agents. Task execution is guided by AppCards. Uncertainty estimation produces curiosity signals that activate external retrieval; retrieved knowledge is consolidated to update the AppCards, which are then reintegrated into the execution pipeline.
JS-divergence-based curiosity estimation.
JS-divergence-based curiosity estimation. The agent predicts the next interface state from the current state and action as a prior distribution, then observes the actual next state as a posterior distribution. The divergence between the two distributions is measured with a tail-adjusted Jensen-Shannon divergence, yielding an information-gain signal that quantifies curiosity.
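The estimation in this figure can be illustrated with a minimal sketch. The code below computes a plain Jensen-Shannon divergence between the prior (predicted) and posterior (observed) next-state distributions and gates retrieval on a threshold; the paper's tail adjustment is not reproduced, and the function names and the threshold value of 0.3 are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def js_divergence(prior, posterior, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.

    Both inputs are epsilon-smoothed and re-normalized so that
    zero-probability bins do not produce infinite KL terms.
    """
    p = np.asarray(prior, dtype=float) + eps
    q = np.asarray(posterior, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    m = 0.5 * (p + q)  # mixture distribution
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def should_retrieve(prior, posterior, threshold=0.3):
    """Trigger external retrieval when the curiosity signal (plain JS
    divergence in nats, bounded above by ln 2) exceeds a threshold.
    The threshold is an illustrative choice, not taken from the paper."""
    return js_divergence(prior, posterior) > threshold
```

When the predicted and observed state distributions agree, the score is near zero and no retrieval fires; a surprising transition pushes the score toward its maximum of ln 2 ≈ 0.693 and activates retrieval.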
Case study: baseline vs AppCard-enhanced execution.
Case study of the task Expense Add Multiple From Gallery. The baseline path on the left fails due to application-name ambiguity, while the AppCard-enhanced path on the right leverages structured knowledge to enable stable and successful task execution.