Welcome to Vanilla Breeze
This bell pulls live notifications from /go/notify/messages — the same contract documented at /docs/concepts/service-contracts/. Static articles like this one are the no-JS / no-backend fallback.
On-device chat via Chrome's LanguageModel API. Optionally page-aware via a context selector. Provider-neutral inline endpoint and external deep-link fallbacks.
An on-device chat component. Without a context attribute, it behaves as a general assistant. With context="#selector" set, the targeted region's text is folded into the system prompt so the model can answer questions about the surrounding article — no retrieval plumbing required.
Provider resolution per session: local Chrome LanguageModel → inline endpoint → external deep-link → unavailable. See AI page-tools v1 for the full contract.
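The resolution order above can be sketched as a pure function. This is an illustrative sketch only (the function and option names are assumptions, not the component's actual internals); it mirrors the documented precedence: local Chrome LanguageModel, then inline endpoint, then external deep-link, then unavailable.

```javascript
// Sketch of per-session provider resolution. Names are hypothetical;
// only the precedence order comes from the documented contract.
function resolveProvider({ hasLanguageModel = false, endpoint = null, fallbackUrl = null } = {}) {
  if (hasLanguageModel) return 'local';  // Chrome LanguageModel API is present
  if (endpoint) return 'inline';         // endpoint="…" attribute configured
  if (fallbackUrl) return 'deep-link';   // fallback-url="…" attribute configured
  return 'unavailable';                  // nothing left to fall back to
}
```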
Add context="#some-selector" and the component reads that region's text on session creation, prepending it to the system prompt under a clear [PAGE CONTENT] delimiter. A ribbon shows character / token counts and warns if the selection exceeds Gemini Nano's window.
<ai-chat context="#article" context-label="this article" placeholder="Ask about this page…">
  <template data-role="system">
    Answer using only the page content provided. If the answer isn't there, say so plainly.
  </template>
  <template data-role="starters">
    Summarize the article in 3 bullets.
    What problem does the author identify?
  </template>
</ai-chat>
Long system prompts go in <template data-role="system">. The system attribute wins if both are set.
Starter chips come from <template data-role="starters">, one prompt per line. They render as a chip row under the ribbon and disappear once the conversation has any messages (CSS :has() handles the toggle).
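The context-folding step can be sketched as a small string helper. The function name is an assumption for illustration; the [PAGE CONTENT] delimiter and the fold-into-system-prompt behavior are from the description above.

```javascript
// Hypothetical sketch of prepending page context to the system prompt
// under the documented [PAGE CONTENT] delimiter.
function buildSystemPrompt(systemText, pageText) {
  if (!pageText) return systemText;  // no context attribute: general assistant
  return `${systemText}\n\n[PAGE CONTENT]\n${pageText}`;
}
```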
Configure endpoint="…" to route through a server you control when on-device AI isn't available. The component sends { prompt, content, mode: "chat" } as JSON; the server streams text/plain chunks back (or returns one-shot application/json). The chat UI is identical regardless of provider.
<ai-chat context="#article" endpoint="/api/ai/chat" placeholder="Ask about this page…"></ai-chat>
Wire format documented in AI page-tools v1, §C.
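The request side of that wire format can be sketched as a fetch-options builder. The helper name is an assumption; the { prompt, content, mode: "chat" } JSON body is the documented shape.

```javascript
// Sketch of the inline-endpoint request: the component POSTs
// { prompt, content, mode: "chat" } as JSON. Helper name is hypothetical.
function buildChatRequest(prompt, content) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt, content, mode: 'chat' }),
  };
}
```

Something like `fetch(endpoint, buildChatRequest(userPrompt, pageText))` would then stream back text/plain chunks, or a one-shot application/json response.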
Configure fallback-url="…" and, when neither on-device AI is available nor an inline endpoint is configured, the Send button opens the configured URL in a new tab. The current draft message is folded into the deep-link's {prompt} placeholder so the external tool picks up where the user left off.
<ai-chat fallback-url="https://claude.ai/new?q={prompt}" fallback-label="Continue with Claude"></ai-chat>
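The {prompt} substitution amounts to URL-encoding the draft into the template. A minimal sketch, with a hypothetical helper name:

```javascript
// Sketch of deep-link construction: the draft message is URL-encoded
// into the {prompt} slot of the configured fallback URL.
function buildDeepLink(template, draft) {
  return template.replace('{prompt}', encodeURIComponent(draft));
}

// buildDeepLink('https://claude.ai/new?q={prompt}', 'hello world')
// → 'https://claude.ai/new?q=hello%20world'
```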
The component's lifecycle is exposed on data-state: checking, ready, thinking, streaming, downloading, error, unavailable, deep-link. Each message element (.ai-chat-msg) carries data-role="user|assistant|error".
<script type="module">
  const chat = document.querySelector('ai-chat');
  chat.addEventListener('ai-chat:state', e => console.log(e.detail.state, e.detail.provider));
  chat.addEventListener('ai-chat:message', e => console.log(e.detail.role, e.detail.text.length));
  chat.addEventListener('ai-chat:context-overflow', () => console.warn('context overflow'));
  chat.addEventListener('ai-chat:error', e => console.error(e.detail.error));
</script>