AI Chat with your Website
Crawl an entire website — docs, blogs, wikis — and chat with AI to ask questions across all pages. Get cross-page summaries, find specific information, and extract insights.
Website
Only crawl pages under the URL path you enter
Skip pages the site asks crawlers not to visit
Paste a URL to start crawling. We'll extract text from all linked pages within the same site.
Chat
Let's chat about a website
Enter a URL in the left panel to crawl a website and start chatting across all its pages.
Crawl a website to start chatting
How It Works
1. Enter a URL & Crawl
Paste any website URL and click Crawl. We'll follow links breadth-first, extracting text from each page. Watch real-time progress as pages are discovered.
2. Smart Indexing
Text from each page is chunked, deduplicated across pages (no repeated navs/footers), and embedded into a vector index. Each chunk remembers which page it came from.
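The chunk-and-deduplicate step can be illustrated roughly as follows. This sketch uses fixed-size character chunks and exact-hash deduplication as simplifying assumptions; it omits the embedding itself, where each surviving chunk would also get a vector before going into the index.

```python
import hashlib

def index_pages(pages, chunk_size=500):
    """Chunk each page's text, drop chunks already seen on another
    page (repeated navs/footers), and record the source URL per chunk.

    `pages` maps URL -> extracted text. In a real pipeline each entry
    would also carry an embedding vector for similarity search.
    """
    seen_hashes = set()
    index = []
    for url, text in pages.items():
        for i in range(0, len(text), chunk_size):
            chunk = text[i:i + chunk_size].strip()
            if not chunk:
                continue
            digest = hashlib.sha256(chunk.encode()).hexdigest()
            if digest in seen_hashes:
                # Identical chunk seen on an earlier page: skip it.
                continue
            seen_hashes.add(digest)
            index.append({"url": url, "text": chunk})
    return index
```

Keeping the source URL on every chunk is what makes per-page citations possible later: the answer step only has to report the `url` field of whichever chunks it retrieved.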
3. Chat with Source Citations
Ask questions and the AI finds the most relevant chunks across all pages. Answers include which page the information came from — full source attribution.
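Retrieval with source attribution can be sketched like this. Simple word overlap stands in for the vector similarity the real app would compute between the question's embedding and each chunk's embedding; everything else (rank the chunks, return the best few with their source pages) follows the same shape.

```python
def answer_sources(question, index, top_k=3):
    """Rank indexed chunks against a question and return the top
    matches with their source pages.

    Word overlap is a toy stand-in for embedding similarity; `index`
    is a list of {"url": ..., "text": ...} entries.
    """
    q_words = set(question.lower().split())
    scored = []
    for entry in index:
        overlap = len(q_words & set(entry["text"].lower().split()))
        if overlap:
            scored.append((overlap, entry))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [{"url": e["url"], "text": e["text"]} for _, e in scored[:top_k]]
```

The returned entries are what the chat layer would hand to the language model as context, and their `url` fields are what surface to you as citations.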
Use Cases
Documentation Sites
Crawl your framework's docs and ask "How do I set up authentication?" — the AI searches across all pages to find the answer with source links.
Blog Archives
Index an entire blog and ask questions that span multiple posts. "What are the author's main arguments about testing?" pulls from across all articles.
Company Knowledge Bases
Crawl internal wikis or knowledge bases and find answers across hundreds of articles. Perfect for onboarding or quick reference.
Research & Tutorials
Index multi-page tutorials or course sites. Ask targeted questions and get answers with references to the specific lesson page.
Try an Example
Click any card below to start crawling a website and chat with its content.
Frequently Asked Questions
How does this differ from Webpage Chat?
Is my data safe?
Does it respect robots.txt?
Yes, by default. The crawler fetches the site's robots.txt file and skips pages that are disallowed. You can toggle this off if you need to crawl pages blocked by robots.txt for private analysis (e.g., your own site's docs behind a robots.txt rule).
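The robots.txt check described above can be done with Python's standard library. This sketch assumes the crawler has already downloaded the raw robots.txt text once before the crawl; each candidate URL is then tested against those rules.

```python
from urllib.robotparser import RobotFileParser

def is_allowed(robots_txt, url, user_agent="*"):
    """Check one URL against a site's robots.txt rules.

    `robots_txt` is the raw file text, assumed to have been fetched
    once from the site's /robots.txt before crawling begins.
    """
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)
```

When the toggle is off, the crawler would simply skip this check and fetch every same-site page it discovers.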