Run the Llama 3.2 Instruct model directly in your browser. All processing happens locally; nothing is sent to a server.
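A minimal sketch of what such an in-browser setup can look like, assuming the transformers.js runtime and an ONNX-converted checkpoint (the model ID below is illustrative, and the actual tool may use a different runtime such as WebLLM):

```ts
import { pipeline } from "@huggingface/transformers";

// Download and cache the model weights in the browser, then build a
// text-generation pipeline. The model ID here is an assumption.
const generator = await pipeline(
  "text-generation",
  "onnx-community/Llama-3.2-1B-Instruct"
);

// Chat-style prompt; inference runs locally, no request leaves the page.
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Why does in-browser inference keep data private?" },
];

const output = await generator(messages, { max_new_tokens: 128 });
// The pipeline returns the conversation with the model's reply appended last.
console.log(output[0].generated_text.at(-1).content);
```

Because the weights are fetched once and cached by the browser, later page loads skip the download and the model runs entirely on the user's device.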