Binary file modified docs/source/_static/bedrock-chat-basemodel.png
Binary file modified docs/source/_static/bedrock-model-access.png
Binary file modified docs/source/_static/bedrock-model-select.png
Binary file added docs/source/_static/chat-at-mention.png
Binary file added docs/source/_static/chat-attach-file.png
Binary file modified docs/source/_static/chat-explain-code-output.png
Binary file modified docs/source/_static/chat-generate-command-response.png
Binary file modified docs/source/_static/chat-generate-input.png
Binary file modified docs/source/_static/chat-history-context-1.png
Binary file modified docs/source/_static/chat-history-context-2.png
Binary file modified docs/source/_static/chat-interface-selection.png
Binary file added docs/source/_static/chat-interface.png
Binary file added docs/source/_static/chat-new.png
Binary file added docs/source/_static/chat-newchat.png
Binary file added docs/source/_static/chat-prompt.png
Binary file modified docs/source/_static/chat-replace-selection-input.png
Binary file added docs/source/_static/chat-response.png
Binary file added docs/source/_static/chat-second.png
Binary file added docs/source/_static/chat-ydoc.png
Binary file modified docs/source/_static/fix-error-cell-selected.png
Binary file modified docs/source/_static/fix-response.png
Binary file modified docs/source/_static/jupyter-ai-screenshot.png
Binary file added docs/source/_static/jupyternaut-settings.png
Binary file added docs/source/_static/magics-alias-usage.png
Binary file added docs/source/_static/magics-dealias.png
Binary file added docs/source/_static/magics-example-openai.png
Binary file added docs/source/_static/magics-format-code.png
Binary file added docs/source/_static/magics-list-all.png
Binary file added docs/source/_static/magics-list-any-provider.png
Binary file added docs/source/_static/magics-list-help.png
Binary file added docs/source/_static/magics-list.png
Binary file added docs/source/_static/magics-set-alias.png
Binary file added docs/source/_static/notebook-chat.png
Binary file modified docs/source/_static/ollama-settings.png
48 changes: 41 additions & 7 deletions docs/source/index.md
@@ -1,30 +1,64 @@
# Jupyter AI
# Jupyter AI v3

Welcome to Jupyter AI, which brings generative AI to Jupyter. Jupyter AI provides a user-friendly
and powerful way to explore generative AI models in notebooks and improve your productivity
in JupyterLab and the Jupyter Notebook. More specifically, Jupyter AI offers:

* An `%%ai` magic that turns the Jupyter notebook into a reproducible generative AI playground.
- A native chat UI in JupyterLab that enables you to work with generative AI as a conversational assistant, and also enables interaction with the active notebook.
- An `%%ai` magic that turns the Jupyter notebook into a reproducible generative AI playground.
This works anywhere the IPython kernel runs (JupyterLab, Jupyter Notebook, Google Colab, VSCode, etc.).
* A native chat UI in JupyterLab that enables you to work with generative AI as a conversational assistant.
* Support for a wide range of generative model providers and models
(AI21, Anthropic, Cohere, Gemini, Hugging Face, MistralAI, OpenAI, SageMaker, NVIDIA, etc.).
- Support for a wide range of generative model providers and models
(AI21, Amazon, Anthropic, Cohere, Gemini, Hugging Face, MistralAI, OpenAI, NVIDIA, etc.).
- Multiple editable chat threads, each saved to a separate Jupyter server document with the `.chat` extension.
- Real-time collaboration (RTC) in both chat and notebooks, where the cloud deployment supports it.
- Support for hundreds of LLMs from many additional providers.
- Chat personas with agentic capabilities, with a default `Jupyternaut` persona.
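
As a quick illustration of the `%%ai` magic, a notebook session might look like the sketch below; the model name is only an example, so substitute any provider and model you have configured. First, load the magics extension in one cell:

```
%load_ext jupyter_ai_magics
```

Then, in a separate cell (the `%%ai` magic must be the first line of its cell):

```
%%ai openai-chat:gpt-4o-mini
Explain what a Jupyter kernel is in two sentences.
```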

Below is the look and feel of Jupyter AI v3, with the chat panel on the left and notebooks on the right. The chat panel shows the Jupyternaut persona responding to prompts and using code from the notebook on the right as context. The right panel shows the use of the `%%ai` magic commands.

<img src="_static/jupyter-ai-screenshot.png"
alt='A screenshot of Jupyter AI showing the chat interface and the magic commands'
width="95%"
class="screenshot" />

## JupyterLab support

**Each major version of Jupyter AI supports *only one* major version of JupyterLab.** Jupyter AI 1.x supports
**Each major version of Jupyter AI supports _only one_ major version of JupyterLab.** Jupyter AI 1.x supports
JupyterLab 3.x, and Jupyter AI 2.x supports JupyterLab 4.x. The feature sets of versions 1.0.0 and 2.0.0
are the same. We will maintain support for JupyterLab 3 for as long as it remains maintained.
are the same. We will maintain support for JupyterLab 3 for as long as it remains maintained. Jupyter AI v3 supports JupyterLab 4.x.

The `main` branch of Jupyter AI targets the newest supported major version of JupyterLab. All new features and most bug fixes will be
committed to this branch. Features and bug fixes will be backported
to work on JupyterLab 3 only if developers determine that they will add sufficient value.
**We recommend that JupyterLab users who want the most advanced Jupyter AI functionality upgrade to JupyterLab 4.**

## Quickstart

It is best to install `jupyter-ai` in an environment. Use [conda](https://conda.io/projects/conda/en/latest/user-guide/install/index.html) to create an environment that uses Python 3.13 and the latest version of JupyterLab:

```
$ conda create -n jupyter-ai python=3.13 jupyterlab
$ conda activate jupyter-ai
```

To install both the `%%ai` magic and the JupyterLab extension, run:

```
$ pip install jupyter-ai==<version number>
```

Choose a version number; the latest is `3.0.0b9`.

For an installation with all related packages, use:

```
$ pip install "jupyter-ai[all]"==<version number>
```

To start Jupyter AI in JupyterLab, run:

```
jupyter lab
```

You should see an interface similar to the one above. Use the `+ Chat` button in the chat panel to open a chat thread and enter prompts. In the chat box, type `@` to see a list of personas; you can select one before entering your query.

To connect an LLM for use in your chat threads, open the `Settings` dropdown menu and select `Jupyternaut settings`. In the settings panel you can select a chat model, specify model parameters if needed, and add API keys for LLMs that require them.

## Contents

```{toctree}
Expand Down
24 changes: 14 additions & 10 deletions docs/source/users/bedrock.md
@@ -2,34 +2,38 @@

[(Return to the Chat Interface page)](index.md#amazon-bedrock-usage)

Bedrock supports many language model providers such as AI21 Labs, Amazon, Anthropic, Cohere, Meta, and Mistral AI. To use the base models from any supported provider make sure to enable them in Amazon Bedrock by using the AWS console. You should also select embedding models in Bedrock in addition to language completion models if you intend to use retrieval augmented generation (RAG) on your documents.
Bedrock supports many language model providers, such as Amazon, Anthropic, Arcee AI, AutoGluon, BRIA AI, Camb.ai, Cohere, DeepSeek, Google, HuggingFace, IBM, Inception, Liquid AI, Meta, Mistral AI, Moonshot, NVIDIA, OpenAI, Qwen, Stability, and Writer; this is only a sample of the many available providers. To use the base models from any supported provider, make sure to enable them in Amazon Bedrock using the AWS console. You should also select embedding models in Bedrock, in addition to language completion models, if you intend to use retrieval augmented generation (RAG) on your documents.

Go to Amazon Bedrock and select `Model Access` as shown here:

<img src="../_static/bedrock-model-access.png"
width="75%"
width="95%"
alt='Screenshot of the left panel in the AWS console where Bedrock model access is provided.'
class="screenshot" />

Click through on `Model Access` and follow the instructions to grant access to the models you wish to use, as shown below. Make sure to accept the end user license (EULA) as required by each model. You may need your system administrator to grant access to your account if you do not have authority to do so.
Click through on `Model Catalog` to see all the available models. Serverless foundation models are now automatically enabled across all AWS commercial regions when first invoked in your account, so you can start using them instantly. You no longer need to manually activate model access through this page. Note that for Anthropic models, some first-time users may need to submit use case details before they can access the model. For serverless models served from AWS Marketplace, a user with AWS Marketplace permissions must invoke the model once to enable it account-wide. After this one-time enablement, all users can access the model without needing these permissions.

Account administrators retain full control over model access through [IAM policies](https://docs.aws.amazon.com/bedrock/latest/userguide/security_iam_id-based-policy-examples.html) and [Service Control Policies](https://aws.amazon.com/blogs/security/unlock-new-possibilities-aws-organizations-service-control-policy-now-supports-full-iam-language/) to restrict model access as needed.
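
For example, an administrator could attach an identity-based policy like the following sketch to deny invocation of a given provider's models; the ARN pattern shown is illustrative, so adjust it to the models you need to restrict:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAnthropicModels",
      "Effect": "Deny",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:*::foundation-model/anthropic.*"
    }
  ]
}
```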

To get started, simply select a model from the Model catalog and open it in the playground or invoke the model using the [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) or [Converse API](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) operations. Note that for Anthropic models, some first-time users may need to submit use case details before they can access the model. Review our documentation for the complete list of available models.
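
As a minimal sketch of calling the Converse API from Python with `boto3`, the model ID and region below are placeholders, and valid AWS credentials are assumed:

```python
def build_converse_request(model_id: str, prompt: str) -> dict:
    """Assemble the keyword arguments for a single-turn Converse API call."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
    }

def converse(model_id: str, prompt: str, region: str = "us-east-1") -> str:
    """Send a prompt to a Bedrock model and return the reply text."""
    import boto3  # requires AWS credentials configured in the environment

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.converse(**build_converse_request(model_id, prompt))
    # The reply text is under output -> message -> content[0] -> text
    return response["output"]["message"]["content"][0]["text"]
```

The `build_converse_request` helper only assembles the request shape; the actual call succeeds only for models enabled in your account and region.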

All Bedrock serverless foundation model EULAs can be accessed [here](https://aws.amazon.com/legal/bedrock/third-party-models/). EULAs can also be accessed from the model details page in the Model catalog.

<img src="../_static/bedrock-model-select.png"
width="75%"
width="95%"
alt='Screenshot of the Bedrock console where models may be selected.'
class="screenshot" />

You should also select embedding models in addition to language completion models if you intend to use retrieval augmented generation (RAG) on your documents.

You may now select a chosen Bedrock model from the drop-down menu box title `Completion model` in the chat interface. If RAG is going to be used then pick an embedding model that you chose from the Bedrock models as well. An example of these selections is shown below:
You may now select a Bedrock model from the drop-down menu titled `Chat model` in the Jupyternaut settings tab (via the `Settings` dropdown). An example of the Bedrock provider models is shown:

<img src="../_static/bedrock-chat-basemodel.png"
width="50%"
width="75%"
alt='Screenshot of the Jupyter AI chat panel where the base language model and embedding model is selected.'
class="screenshot" />

If your provider requires an API key, enter it in the box that appears for that provider. Make sure to click `Save Changes` so that your inputs are saved.

Bedrock also allows custom models to be trained from scratch or fine-tuned from a base model. Jupyter AI enables a custom model to be called in the chat panel using its `arn` (Amazon Resource Name). A fine-tuned model will have your 12-digit customer number in the ARN:
<!-- Bedrock also allows custom models to be trained from scratch or fine-tuned from a base model. Jupyter AI enables a custom model to be called in the chat panel using its `arn` (Amazon Resource Name). A fine-tuned model will have your 12-digit customer number in the ARN:

<img src="../_static/bedrock-chat-custom-model-arn.png"
width="75%"
@@ -84,6 +88,6 @@ Amazon Bedrock now permits cross-region inference, where a model hosted in a dif
1. Bedrock Base models: All available models will already be available in the drop down model list. The above interface also allows use of base model IDs or ARNs, though this is unnecessary as they are in the dropdown list.
2. Bedrock Custom models: If you have fine-tuned a Bedrock base model you may use the ARN for this custom model. Make sure to enter the correct provider information, such as `amazon`, `anthropic`, `cohere`, `meta`, `mistral` (always in lower case).
3. Provisioned Models: These are models that run on dedicated endpoints. Users can purchase Provisioned Throughput Model Units to get faster throughput. These may be base or custom models. Enter the ARN for these models in the Model ID field.
4. Cross-region Inference: Use the Inference profile ID for the cross-region model instead of the ARN.
4. Cross-region Inference: Use the Inference profile ID for the cross-region model instead of the ARN. -->

[(Return to the Chat Interface page)](index.md#amazon-bedrock-usage)