Overview
In this tutorial, we’ll walk you step by step through building an Agent with n8n and the MuleRun API that uses Nano-Banana to generate pixel-art style images.
Difficulty: ★☆☆☆☆ (1 / 5)
For n8n-related questions, check the official documentation. You can also ask in the n8n community.
The workflow will:
- Receive user input from MuleRun
- Use an AI Agent to generate an image
- Display the result in the user’s Session page
Step 1: Create a new n8n workflow
First, you’ll need a running n8n instance. You can either use n8n Cloud or install it locally. Check out this comparison to decide which option fits you best. Create a new workflow, and you’ll see an empty canvas like this:
An n8n workflow consists of three types of nodes:
- Input nodes: receive user requests
- Processing nodes: handle tasks (AI, data processing, etc.)
- Output nodes: send results back to the Session page
Step 2: Add an input node
Currently, MuleRun only supports starting a workflow with the n8n Form Trigger node. More details here.
You can also search for “n8n Form” and select On new n8n Form Event.

Add a form field named Image Description and mark it as required.
The Field Name will appear in the Session form. You can also edit the Form Title and node name for a better display.
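For orientation, here is a sketch of how this Form Trigger configuration might look in the exported workflow JSON. The property names (`formTitle`, `formFields`, `fieldLabel`, `requiredField`) are an approximation of n8n's export format, not an exact schema, and the form title is a made-up example:

```javascript
// Approximate shape of the Form Trigger parameters in the exported
// workflow JSON (field names are assumptions, not an exact n8n schema).
const formTriggerParams = {
  formTitle: "Pixel-Art Generator", // shown at the top of the Session form
  formFields: {
    values: [
      {
        fieldLabel: "Image Description", // becomes the Field Name in the form
        fieldType: "text",
        requiredField: true,             // mark it as required
      },
    ],
  },
};

console.log(formTriggerParams.formFields.values[0].fieldLabel);
```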
Step 3: Add an AI node
Now let’s add an AI Agent node to handle the LLM task.
We recommend only using the AI Agent node for LLM-related work.
Nano-Banana is the Gemini 2.5 Flash Image Preview model, which MuleRun exposes via an OpenAI-compatible API. So, set the provider to OpenAI.
MuleRouter’s LLM APIs (including Gemini) are all OpenAI-compatible. For the best compatibility, you should prioritize using the OpenAI format.

- API Key: your MuleRun API Key
- Base URL: https://api.mulerun.com/v1
See this reference for API / Base URL details.
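To make the credential settings concrete, here is a sketch of the request the AI node effectively sends: an OpenAI-style chat completions call against the MuleRun Base URL. The payload shape follows the OpenAI API format the text describes; `MULERUN_API_KEY` and the prompt are placeholders:

```javascript
// Sketch of the OpenAI-compatible request behind the AI node.
// MULERUN_API_KEY is a placeholder for your own MuleRun API Key.
const BASE_URL = "https://api.mulerun.com/v1";

function buildRequest(apiKey, userPrompt) {
  return {
    url: `${BASE_URL}/chat/completions`,
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: {
      model: "gemini-2.5-flash-image-preview",
      messages: [{ role: "user", content: userPrompt }],
    },
  };
}

const req = buildRequest(process.env.MULERUN_API_KEY ?? "sk-placeholder", "a pixel-art cat");
console.log(req.url); // https://api.mulerun.com/v1/chat/completions
```

You would pass `req.body` as the JSON payload of a POST to `req.url`; in this tutorial, the AI Agent node handles that for you.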

Set the Model ID to gemini-2.5-flash-image-preview. You’ll find supported IDs in this list.


Drag the Image Description field (from Input) onto the Prompt field.

The System Prompt defines the rules the AI follows. The User Prompt comes from the user’s input.
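In chat-completion terms, the two prompts map onto two messages: the System Prompt sets the rules, and the User Prompt carries the form input. A minimal sketch (the system text here is an illustrative example, not the tutorial's exact prompt):

```javascript
// How the two prompts become chat messages: system = rules, user = input.
function buildMessages(imageDescription) {
  return [
    {
      role: "system",
      content: "You generate pixel-art style images from short descriptions.", // example rule
    },
    { role: "user", content: imageDescription }, // comes from the Session form
  ];
}

const messages = buildMessages("a pixel-art castle at sunset");
console.log(messages.map((m) => m.role).join(", ")); // system, user
```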
Step 4: Format Model Output
The model returns results, but we need to format them. We only want the generated description and image file.

Pro Tip: If you’re comfortable with JavaScript in n8n, you can handle both in one node.

- Create a field named ImageBase64
- Drag the image URL into it
Pro Tip: You can use a JSON expression like:
{{$json.output.find(o => o.type === 'image_url')?.image_url?.url}}
This avoids errors if the output order changes.

- Create a field named
ModelTextOutput
- Drag the text field into it
Pro Tip: Use JSON expressions like:
{{$json.output.find(o => o.type === 'text')?.text}}
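Both expressions are plain JavaScript inside n8n's `{{ }}` syntax. Here is a sketch of how they behave against a mock `$json.output` array; the item shape (`type`, `text`, `image_url.url`) is an assumption based on OpenAI-style responses that mix text and image parts:

```javascript
// Mock of the model's output array (shape is an assumption, for illustration).
const $json = {
  output: [
    { type: "text", text: "A pixel-art cat sitting on a fence." },
    { type: "image_url", image_url: { url: "data:image/png;base64,iVBOR..." } },
  ],
};

// The same lookups as the n8n expressions above:
const imageBase64 = $json.output.find((o) => o.type === "image_url")?.image_url?.url;
const modelTextOutput = $json.output.find((o) => o.type === "text")?.text;

console.log(modelTextOutput); // A pixel-art cat sitting on a fence.
```

Because `find` searches by `type` rather than by position, the lookups still work if the model returns the image before the text.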


Next, convert ImageBase64 back into binary data and output it as a file named Image.
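n8n's file-conversion node does this decoding internally; in plain Node.js the same step is a `Buffer` decode of the base64 payload in the data URL. A sketch with a toy payload (not a real PNG):

```javascript
// Decode a base64 data URL into raw bytes, as the file node does internally.
const dataUrl = "data:image/png;base64,aGVsbG8="; // toy payload ("hello"), not a real image
const base64Part = dataUrl.split(",")[1];         // strip the "data:...;base64," prefix
const bytes = Buffer.from(base64Part, "base64");  // raw binary data

console.log(bytes.toString("utf8")); // hello
console.log(bytes.length);           // 5
```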



The node’s output should now contain both Image and ModelTextOutput.
Step 5: Output Result
To display results properly in the Session page, output them in a specific JSON format:
- Images will render automatically
- Text must go into a markdown_content field
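A sketch of the final output item; the exact envelope MuleRun expects may include more than this, but the text must sit in a field named exactly `markdown_content`:

```javascript
// Sketch of the Session output item (field name must be exact).
function buildSessionOutput(modelText) {
  return {
    markdown_content: modelText, // rendered as Markdown in the Session page
    // the Image file travels as binary data alongside this JSON
    // and renders automatically
  };
}

const out = buildSessionOutput("Here is your pixel-art image:");
console.log(Object.keys(out)); // [ 'markdown_content' ]
```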

Pro Tip: If you know JavaScript in n8n, you can directly pull from previous nodes.
- Name: markdown_content
- Value: ModelTextOutput
The field name must be exactly markdown_content.
Step 6: Test Your Workflow

Run the workflow and check that the output contains:
- A markdown_content field
- An image
Step 7: Upload Your Agent to MuleRun
Click Download to export your workflow JSON. Open Creator Studio and create a new Agent.
- Name your Agent
- Upload the JSON file
- Choose the correct n8n version


If you want to use models not supported by MuleRun, add additional credentials in n8n and configure separately.

Before submission, the Agent is Unpublished. You can still run it yourself.



A running Mule.

Step 8: Submit Your Agent
When everything works, click Submit to send your Agent for review.
Editing an Agent under review will automatically reject the submitted version.