import asyncio

from browser_use import Agent, Browser, ChatBrowserUse
# from browser_use import ChatGoogle     # ChatGoogle(model='gemini-3-flash-preview')
# from browser_use import ChatAnthropic  # ChatAnthropic(model='claude-sonnet-4-6')


async def main():
    browser = Browser(
        # use_cloud=True,  # Use a stealth browser on Browser Use Cloud
    )
    agent = Agent(
        task="Find the number of stars of the browser-use repo",
        llm=ChatBrowserUse(),
        # llm=ChatGoogle(model='gemini-3-flash-preview'),
        # llm=ChatAnthropic(model='claude-sonnet-4-6'),
        browser=browser,
    )
    await agent.run()


if __name__ == "__main__":
    asyncio.run(main())
Fast, persistent browser automation from the command line:
browser-use open https://example.com # Navigate to URL
browser-use state # See clickable elements
browser-use click 5 # Click element by index
browser-use type "Hello" # Type text
browser-use screenshot page.png # Take screenshot
browser-use close # Close browser
The CLI keeps the browser running between commands for fast iteration. See CLI docs for all commands.
Claude Code Skill
For Claude Code, install the skill to enable AI-assisted browser automation:
Should I use the Browser Use system prompt with the open-source preview model?
Yes. If you use ChatBrowserUse(model='browser-use/bu-30b-a3b-preview') with a normal Agent(...), Browser Use still sends its default agent system prompt on your behalf.
You do not need to add a separate custom "Browser Use system message" just because you switched to the open-source preview model. Only use extend_system_message or override_system_message when you intentionally want to customize the default behavior for your task.
If you want the best default speed/accuracy, we still recommend the newer hosted bu-* models. If you want the open-source preview model, the setup stays the same apart from the model= value.
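A minimal sketch of switching to the open-source preview model: everything stays as in the quickstart above, and only the model= value changes. The model name is the one quoted in this FAQ; the commented-out extend_system_message line illustrates the optional customization hook mentioned above and is not required.

```python
from browser_use import Agent, ChatBrowserUse

agent = Agent(
    task="Find the number of stars of the browser-use repo",
    # Only the model= value differs from the default hosted setup.
    llm=ChatBrowserUse(model='browser-use/bu-30b-a3b-preview'),
    # Optional, and only if you deliberately want to adjust the default prompt:
    # extend_system_message='Prefer official documentation sources.',
)
```

No separate system message is needed; the default agent prompt is applied automatically.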
Can I use custom tools with the agent?
Yes! You can add custom tools to extend the agent's capabilities:
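A sketch of registering a custom tool, assuming the Tools registry and @tools.action decorator from the browser_use API; save_note and its file-writing body are hypothetical examples, so check the exact decorator signature against your installed version.

```python
from browser_use import Agent, ChatBrowserUse, Tools

tools = Tools()


@tools.action(description='Save a short note to a local file for the user')
def save_note(text: str) -> str:
    # Hypothetical helper: append the note to notes.txt and report back
    # to the agent so it knows the tool succeeded.
    with open('notes.txt', 'a') as f:
        f.write(text + '\n')
    return f'Saved note: {text}'


async def main():
    agent = Agent(
        task="Find the number of stars of the browser-use repo and save it as a note",
        llm=ChatBrowserUse(),
        tools=tools,  # make the custom tool available to the agent
    )
    await agent.run()
```

The agent can then call save_note like any built-in action whenever the task calls for it.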