In a new tutorial, engineers outline how to build an intelligent agent that adapts at every turn. The system taps into Google’s Gemini API through the SAGE framework, short for Self-Adaptive Goal-oriented Execution, which comprises four main modules:
- Self-Assessment, which tracks current performance
- Adaptive Planning, which adjusts the task list
- Goal-oriented Execution, which focuses on mission targets
- Experience Integration, which applies past findings to fresh plans
By combining these elements, the agent can break a broad mission into clear steps, map out a path to completion, perform each task precisely, and revise its course based on lessons from prior runs. This hands-on guide reveals the architecture behind AI-powered decision making.
At the outset, the code imports key packages. The google.generativeai library connects to the Gemini model. Core Python modules—json for data interchange, time for performance tracking, and dataclasses to structure task objects—form the foundation for task management. A TaskStatus enum then assigns each item a state: pending, in progress, completed, or failed.
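A rough sketch of that setup follows. The exact member names in the tutorial's enum may differ, and the google.generativeai import is commented out here because it requires an installed client and an API key:

```python
import json          # serializing task data and model responses
import time          # timing task execution
from dataclasses import dataclass
from enum import Enum

# import google.generativeai as genai  # Gemini client; needs an API key

class TaskStatus(Enum):
    """Lifecycle states a task can move through."""
    PENDING = "pending"
    IN_PROGRESS = "in_progress"
    COMPLETED = "completed"
    FAILED = "failed"
```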
The next step defines a Task data class decorated with @dataclass, capturing fields such as an identifier, a text description, a numerical priority, and any dependencies on other tasks. Next comes the SAGEAgent class, which serves as the orchestrator: it loops through self-assessment to gauge overall progress, computes an adaptive plan, executes each task in sequence, and logs outcome details. The results feed back into the agent’s internal memory to sharpen future performance.
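A minimal, self-contained sketch of the Task record and that orchestration loop is shown below. In the real tutorial a Gemini call performs each task; here execution is stubbed so the control flow is visible, and method names beyond Task, TaskStatus, and SAGEAgent are illustrative:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class TaskStatus(Enum):
    PENDING = "pending"
    IN_PROGRESS = "in_progress"
    COMPLETED = "completed"
    FAILED = "failed"

@dataclass
class Task:
    id: str                         # unique identifier
    description: str                # what the task should accomplish
    priority: int = 1               # higher-priority tasks run first
    dependencies: List[str] = field(default_factory=list)  # prerequisite task ids
    status: TaskStatus = TaskStatus.PENDING

class SAGEAgent:
    """Orchestrator: self-assess, plan, execute, integrate experience."""

    def __init__(self) -> None:
        self.tasks: List[Task] = []
        self.memory: List[str] = []  # learned insights fed into later plans

    def self_assess(self) -> float:
        """Fraction of tasks completed so far (0.0 when no tasks exist)."""
        if not self.tasks:
            return 0.0
        done = sum(t.status is TaskStatus.COMPLETED for t in self.tasks)
        return done / len(self.tasks)

    def run_cycle(self) -> None:
        """One SAGE pass: execute pending tasks in priority order, log outcomes."""
        for task in sorted(self.tasks, key=lambda t: t.priority, reverse=True):
            if task.status is not TaskStatus.PENDING:
                continue
            task.status = TaskStatus.IN_PROGRESS
            # In the tutorial, a Gemini call does the work here; stubbed out.
            task.status = TaskStatus.COMPLETED
            self.memory.append(f"completed: {task.id}")
```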
To illustrate its capabilities, the guide sets up a real-world example centered on sustainable urban gardening. After initializing SAGEAgent with a valid Gemini API key and memory settings, the agent receives the high-level goal and launches the full SAGE cycle. During each pass, the system dynamically generates new tasks, flags completed items, captures errors, and logs any roadblocks for review so developers can refine parameters in later runs.
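The goal-decomposition step can be sketched as follows. The function and prompt wording are assumptions based on the description, and an offline stand-in replaces the Gemini call (in the real code, genai.GenerativeModel(...).generate_content would supply the model's text) so the flow runs without a key:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    id: str
    description: str

def plan_tasks(goal: str, generate: Callable[[str], str]) -> List[Task]:
    """Ask the model to decompose a goal into one task description per line."""
    prompt = f"Break this goal into concrete tasks, one per line: {goal}"
    lines = [ln.strip() for ln in generate(prompt).splitlines() if ln.strip()]
    return [Task(id=f"t{i + 1}", description=ln) for i, ln in enumerate(lines)]

def fake_model(prompt: str) -> str:
    # Canned response standing in for a Gemini completion.
    return ("Survey available rooftop space\n"
            "Select drought-tolerant crops\n"
            "Design a drip irrigation layout")

tasks = plan_tasks("Design a sustainable urban garden", fake_model)
for t in tasks:
    print(t.id, t.description)
```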
Once the run ends, the system prints a summary report. Readers see progress ratings for each stage, counts of tasks in different states, and a record of learned insights. Those data points highlight how effectively the agent met its objectives and point to areas for fine-tuning.
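That report can be sketched as a simple tally over task states; the field names below are illustrative rather than the tutorial's exact output format:

```python
from collections import Counter
from enum import Enum

class TaskStatus(Enum):
    PENDING = "pending"
    COMPLETED = "completed"
    FAILED = "failed"

def summarize(statuses, insights):
    """Build a summary dict: progress ratio, per-state counts, learned insights."""
    counts = Counter(s.value for s in statuses)
    total = len(statuses)
    progress = counts.get("completed", 0) / total if total else 0.0
    return {"progress": round(progress, 2),
            "counts": dict(counts),
            "insights": insights}

report = summarize(
    [TaskStatus.COMPLETED, TaskStatus.COMPLETED, TaskStatus.FAILED],
    ["container depth matters for root vegetables"],
)
print(report)
```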
The modular layout allows developers to slot in extra components or expand into multi-agent configurations. It can adapt to specific domains or scale across larger workflows. This flexible design opens the door to more ambitious projects that rely on automated, self-improving routines.

