J’ai commencé à apprendre l’imparfait. Par exemple, ‘être’ à l’imparfait est j’étais, tu étais, il/elle était, nous étions, vous étiez, ils/elles étaient. Pour ‘avoir’, c’est j’avais, tu avais, il/elle avait, nous avions, vous aviez, ils/elles avaient. Avec le verbe en -er ‘aller’, c’est j’allais, tu allais, il/elle allait, nous allions, vous alliez, ils/elles allaient.
French
English
Notes
<Ça te dit de> venir avec nous ?
Tu désires ou tu veux. On peut dire aussi ‘Qu’en dis-tu ?’, l’équivalent de ‘How about...?’.
<Ça vous dit de> venir avec nous ?
Il trouve toujours de l’argent dans la rue. <Il a de la chance.>
<Nous allons louer> une maison pendant une semaine
payer pour rester dans la maison
Je suis arrivé <en avance> à la gare.
avant l’heure
Souhaiter quelque chose.
je souhaite, tu souhaites, il/elle souhaite, nous souhaitons, vous souhaitez, ils/elles souhaitent
Nous <y sommes allés> en train
Nous <sommes allés là> en train
Alors, avez-vous passé du temps <en Europe de l’Est>?
Quand on <fait une randonnée>, on se promène longtemps, en général à pied.
Faire une randonnée : ‘to take a hike’ ; Faire de la randonnée : ‘Je vais faire de la randonnée’
J’adore <skier>
faire du ski vs skier : J’aime beaucoup skier. Je vais faire du ski bientôt.
Vous connaissez <les Pyrénées>?
Je veux aller skier en France cet hiver. D’accord. Ça te dit d’aller dans les Pyrénées?
Nous allons dans <les Alpes françaises>
Nous avons <un pneu crevé>! Et nous n’avons pas de <roue de secours>.
un pneu sans air
Et tu <m’as ouvert> la porte. Tu <m’as surprise> avec des fleurs!
En passé composé
On y est! Entrons.
C’est ça ; ici ; allons-y !
sentiers
On fait de la randonnée sur des sentiers.
Les vues du sommet sont incroyables
Il faut lui souhaiter bonne chance
vs Tu dois lui souhaiter bonne chance
Ça prend
sourire
à mon époque, on faisait moins de vélo que maintenant
l’imparfait:
Nous étions jeunes et beaux
Nous avions une voiture jaune quand nous étions plus jeunes
Ils pouvaient danser <pendant des heures> quand ils avaient vingt ans.
Avant, ils faisaient la cuisine chez eux
La machine à laver et le micro-ondes sont <des appareils électriques> de la maison.
Quand j’étais petit, mes parents avaient seulement un appareil électrique chez eux.
À mon époque, on faisait moins de vélo que maintenant
How to build an agent using LangGraph. Try the code locally by downloading it from GitHub, or run it in a Kaggle notebook.
"""
Build an agent with LangGraph
Google Gen AI 5-Day Intensive Course
Host: Kaggle
Day: 3
Codelab: https://www.kaggle.com/code/markishere/day-3-building-an-agent-with-langgraph/#IMPORTANT!
"""
from pprint import pprint
from typing import Annotated, Literal
from google import genai
from IPython.display import Image
from langchain_core.messages.ai import AIMessage
from langchain_core.tools import tool
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode
from typing_extensions import TypedDict
from collections.abc import Iterable
from random import randint
from langchain_core.messages.tool import ToolMessage
class OrderState(TypedDict):
"""State representing the customer's order conversation."""
    # Preserves the conversation history between nodes.
    # The 'add_messages' annotation tells LangGraph that this state field
    # is updated by appending returned messages, not replacing them.
messages: Annotated[list, add_messages]
# The customer's in-progress order.
order: list[str]
# Flag indicating that the order is placed and completed.
finished: bool
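# A rough sketch (not part of the codelab) of what the add_messages reducer
# does when a node returns new messages: they are appended to the existing
# history rather than replacing it, roughly:
#   add_messages([("user", "hi")], [("ai", "hello")])
#   # -> [HumanMessage("hi"), AIMessage("hello")]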
# The system instruction defines how the chatbot is expected to behave
# and includes rules for when to call different functions,
# as well as rules for the conversation, such as tone and what is permitted
# for discussion.
BARISTABOT_SYSINT = (
# 'system' indicates the message is a system instruction.
"system",
"You are a BaristaBot, an interactive cafe ordering system. A human will talk to you about the "
"available products you have and you will answer any questions about menu items (and only about "
"menu items - no off-topic discussion, but you can chat about the products and their history). "
"The customer will place an order for 1 or more items from the menu, which you will structure "
"and send to the ordering system after confirming the order with the human. "
"\n\n"
"Add items to the customer's order with add_to_order, and reset the order with clear_order. "
"To see the contents of the order so far, call get_order (this is shown to you, not the user) "
"Always confirm_order with the user (double-check) before calling place_order. Calling confirm_order will "
"display the order items to the user and returns their response to seeing the list. Their response may contain modifications. "
"Always verify and respond with drink and modifier names from the MENU before adding them to the order. "
"If you are unsure a drink or modifier matches those on the MENU, ask a question to clarify or redirect. "
"You only have the modifiers listed on the menu. "
"Once the customer has finished ordering items, Call confirm_order to ensure it is correct then make "
"any necessary updates and then call place_order. Once place_order has returned, thank the user and "
"say goodbye!"
"\n\n"
"If any of the tools are unavailable, you can break the fourth wall and tell the user that "
"they have not implemented them yet and should keep reading to do so.",
)
# This is the message with which the system opens the conversation.
WELCOME_MSG = "Welcome to the BaristaBot cafe. Type `q` to quit. How may I serve you today?"
# Define chatbot node
# This node will represent a single turn in a chat conversation
#
# Try using different models. The Gemini 2.0 flash model is highly capable, great with tools,
# and has a generous free tier. If you try the older 1.5 models, note that the 'pro' models are
# better at complex multi-tool cases like this, but the 'flash' models are faster and have more
# free quota.
#
# Check out the features and quota differences here:
# - https://ai.google.dev/gemini-api/docs/models/gemini
llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash")
def chatbot(state: OrderState) -> OrderState:
"""The chatbot itself. A simple wrapper around the model's own chat interface."""
message_history = [BARISTABOT_SYSINT] + state["messages"]
return {"messages": [llm.invoke(message_history)]}
# Set up the initial graph based on our state definition.
graph_builder = StateGraph(OrderState)
# Add the chatbot function to the app graph as a node called 'chatbot'.
graph_builder.add_node("chatbot", chatbot)
# Define the chatbot node as the app entrypoint.
graph_builder.add_edge(START, "chatbot")
chat_graph = graph_builder.compile()
# Render the graph to visualize it.
Image(chat_graph.get_graph().draw_mermaid_png())
# The defined graph only has one node.
# So the chat will begin at __start__, execute the chatbot node and terminate.
user_msg = "Hello, what can you do?"
state = chat_graph.invoke({"messages": [user_msg]})
# The state object contains lots of information. Uncomment the pprint lines to see it all.
pprint(state)
# Note that the final state now has 2 messages. Our HumanMessage, and an additional AIMessage.
for msg in state["messages"]:
print(f"{type(msg).__name__}: {msg.content}")
# This can be executed in a Python loop; here it is invoked manually once.
user_msg2 = "Oh great, what kinds of latte can you make?"
state["messages"].append(user_msg2)
state = chat_graph.invoke(state)
# pprint(state)
for msg in state["messages"]:
print(f"{type(msg).__name__}: {msg.content}")
# Add a human node
# LangGraph graphs can loop between nodes.
# This node will display the last message from the LLM to the user,
# then prompt them for their next input.
def human_node(state: OrderState) -> OrderState:
"""Display the last model message to the user and get the user's input."""
    last_msg = state["messages"][-1]
print("Model:", last_msg.content)
user_input = input("User: ")
    # If it looks like the user is trying to quit, flag the conversation as over.
if user_input in {"q", "quit", "exit", "goodbye"}:
state["finished"] = True
return state | {"messages": [("user", user_input)]}
def chatbot_with_welcome_msg(state: OrderState) -> OrderState:
"""The chatbot itself. A wrapper around the model's own chat interface."""
if state["messages"]:
# If there are messages, continue the conversation with the Gemini model.
        new_output = llm.invoke([BARISTABOT_SYSINT] + state["messages"])
else:
# If there are no messages, start with the welcome message.
new_output = AIMessage(content=WELCOME_MSG)
return state | {"messages": [new_output]}
# Start building a new graph.
graph_builder = StateGraph(OrderState)
# Add the chatbot and human nodes to the app graph.
graph_builder.add_node("chatbot", chatbot_with_welcome_msg)
graph_builder.add_node("human", human_node)
# Start with the chatbot again.
graph_builder.add_edge(START, "chatbot")
# The chatbot will always go to the human next.
graph_builder.add_edge("chatbot", "human")
# Create a conditional edge
def maybe_exit_human_node(state: OrderState) -> Literal["chatbot", "__end__"]:
"""Route to the chatbot, unless it looks like the user is exiting."""
if state.get("finished", False):
return END
else:
return "chatbot"
graph_builder.add_conditional_edges("human", maybe_exit_human_node)
chat_with_human_graph = graph_builder.compile()
Image(chat_with_human_graph.get_graph().draw_mermaid_png())
# The default recursion limit for traversing nodes is 25 - setting it higher means
# you can try a more complex order with multiple steps and round-trips and you can chat for longer!
config = {"recursion_limit": 100}
# Remember that this will loop forever, unless you input 'q', 'quit' or one of the other exit terms
# defined in 'human_node'.
# Uncomment this line to execute the graph:
# state = chat_with_human_graph.invoke({"messages": []}, config)
#
# Things to try:
# - Just chat! There's no ordering or menu yet.
# - 'q' to exit.
pprint(state)
# Add a "live" menu
# Create a dynamic menu that can respond to changing stock levels.
# There are two types of tools: stateless and stateful.
# Stateless tools run automatically; getting the current menu makes no changes.
# Stateful tools modify the order.
# In LangGraph, Python functions are exposed as tools by applying the @tool decorator.
@tool
def get_menu() -> str:
"""Provide the latest up-to-date menu."""
# Note that this is just hard-coded text, but you could connect this to a live stock
    # database, or you could use Gemini's multi-modal capabilities and take photos
# of your cafe's chalk menu or the products on the counter, and assemble them into an input.
return """
MENU:
Coffee Drinks:
Espresso
Americano
Cold Brew
Coffee Drinks with Milk:
Latte
Cappuccino
Cortado
Macchiato
Mocha
Flat White
Tea Drinks:
English Breakfast Tea
Green Tea
Earl Grey
Tea Drinks with Milk:
Chai Latte
Matcha Latte
London Fog
Other Drinks:
Steamer
Hot Chocolate
Modifiers:
Milk options: Whole, 2%, Oat, Almond, 2% Lactose Free; Default option: whole
Espresso shots: Single, Double, Triple, Quadruple; default: Double
Caffeine: Decaf, Regular; default: Regular
Hot-Iced: Hot, Iced; Default: Hot
Sweeteners (option to add one or more): vanilla sweetener, hazelnut sweetener, caramel sauce, chocolate sauce, sugar free vanilla sweetener
Special requests: any reasonable modification that does not involve items not on the menu, for example: 'extra hot', 'one pump', 'half caff', 'extra foam', etc.
"dirty" means add a shot of espresso to a drink that doesn't usually have it, like "Dirty Chai Latte".
"Regular milk" is the same as 'whole milk'.
"Sweetened" means add some regular sugar, not a sweetener.
Soy milk has run out of stock today, so soy is not available.
"""
# Add the tool to the graph
# Define the tools and create a "tools" node.
tools = [get_menu]
tool_node = ToolNode(tools)
# Attach the tools to the model so that it knows what it can call.
llm_with_tools = llm.bind_tools(tools)
def maybe_route_to_tools(state: OrderState) -> Literal["tools", "human"]:
"""Route between human or tool nodes, depending if a tool call is made."""
if not (msgs := state.get("messages", [])):
raise ValueError(f"No messages found when parsing state: {state}")
# Only route based on the last message.
msg = msgs[-1]
    # When the chatbot returns tool_calls, route to the "tools" node.
if hasattr(msg, "tool_calls") and len(msg.tool_calls) > 0:
return "tools"
else:
return "human"
def chatbot_with_tools(state: OrderState) -> OrderState:
"""The chatbot with tools. A simple wrapper around the model's own chat interface."""
defaults = {"order": [], "finished": False}
if state["messages"]:
new_output = llm_with_tools.invoke(
[BARISTABOT_SYSINT] + state["messages"]
)
else:
new_output = AIMessage(content=WELCOME_MSG)
# Set up some defaults if not already set, then pass through the provided state,
# overriding only the "messages" field.
return defaults | state | {"messages": [new_output]}
graph_builder = StateGraph(OrderState)
# Add the nodes, including the new tool_node.
graph_builder.add_node("chatbot", chatbot_with_tools)
graph_builder.add_node("human", human_node)
graph_builder.add_node("tools", tool_node)
# Chatbot may go to tools, or human.
graph_builder.add_conditional_edges("chatbot", maybe_route_to_tools)
# Human may go back to chatbot, or exit.
graph_builder.add_conditional_edges("human", maybe_exit_human_node)
# Tools always route back to chat afterwards.
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
graph_with_menu = graph_builder.compile()
Image(graph_with_menu.get_graph().draw_mermaid_png())
# Remember that you have not implemented ordering yet, so this will loop forever,
# unless you input `q`, `quit` or one of the other exit terms defined in the
# `human_node`.
# Uncomment this line to execute the graph:
# state = graph_with_menu.invoke({"messages": []}, config)
# Things to try:
# - I'd love an espresso drink, what have you got?
# - What teas do you have?
# - Can you do a long black? (this is on the menu as an "Americano" - see if it can
# figure it out)
# - 'q' to exit.
pprint(state)
# Handle orders
# Update state to track an order and provide simple tools that update the state.
# You will need to be explicit as the model should not directly have access to
# the app's internal state.
#
# These functions have no body; LangGraph does not allow @tools to update the
# conversation state, so you will implement a separate node to handle state
# updates.
@tool
def add_to_order(drink: str, modifiers: Iterable[str]) -> str:
"""Adds the specified drink to the customer's order, including any modifiers.
Returns:
The updated order in progress.
"""
@tool
def confirm_order() -> str:
"""Asks customer if the order is correct.
Returns:
The user's free-text response.
"""
@tool
def get_order() -> str:
"""Returns the users order so far. One item per line."""
@tool
def clear_order():
"""Removes all items from the user's order."""
@tool
def place_order() -> int:
"""Sends the order to the barista for fulfillment.
Returns:
The estimated number of minutes until the order is ready.
"""
def order_node(state: OrderState) -> OrderState:
"""The ordering node. This is where the order state is manipulated."""
tool_msg = state.get("messages", [])[-1]
order = state.get("order", [])
outbound_msgs = []
order_placed = False
for tool_call in tool_msg.tool_calls:
if tool_call["name"] == "add_to_order":
            # Each order item is just a string. This is where it is assembled
            # as "drink (modifiers, ...)".
modifiers = tool_call["args"]["modifiers"]
modifier_str = ", ".join(modifiers) if modifiers else "no modifiers"
order.append(f"{tool_call["args"]["drink"]} ({modifier_str}))
response = "\n".join(order)
elif tool_call["name"] == "confirm_order":
# We could entrust the LLM to do order confirmation, but it is a good practice to
# show the user the exact data that comprises their order so that what they confirm
# precisely matches the order that goes to the kitchen - avoiding hallucination
# or reality skew.
# In a real scenario, this is where you would connect your POS screen to show the
# order to the user.
print("Your order:")
if not order:
print(" (no items)")
for drink in order:
print(f" {drink}")
response = input("Is this correct?")
elif tool_call["name"] == "get_order":
response = "\n".join(order) if order else "(no order)"
elif tool_call["name"] == "clear_order":
order.clear()
response = None
elif tool_call["name"] == "place_order":
order_text = "\n".join(order)
print("Sending order to kitchen!")
print(order_text)
# TODO: Implement cafe.
order_placed = True
response = randint(1, 5) # ETA in minutes
else:
            raise NotImplementedError(f'Unknown tool call: {tool_call["name"]}')
        # Record the tool results as ToolMessages.
outbound_msgs.append(
ToolMessage(
content=response,
name=tool_call["name"],
tool_call_id=tool_call["id"]
)
)
return {"messages": outbound_msgs, "order": order, "finished": order_placed}
def maybe_route_to_tools(state: OrderState) -> str:
"""Route between chat and tool nodes if a tool call is made."""
if not (msgs := state.get("messages", [])):
raise ValueError(f"No messages found when parsing state: {state}")
msg = msgs[-1]
if state.get("finished", False):
# When an order is placed, exit the app. The system instruction indicates
# that the chatbot should say thanks and goodbye at this point, so we can exit
# cleanly.
return END
elif hasattr(msg, "tool_calls") and len(msg.tool_calls) > 0:
# Route to 'tools' node for any automated tool calls first.
if any(
tool["name"] in tool_node.tools_by_name,keys() for tool in msg.tool_calls
):
return "tools"
else:
return "ordering"
else:
return "human"
# Define the tools so the LLM knows what it can invoke.
# Auto-tools will be invoked automatically by the ToolNode.
auto_tools = [get_menu]
tool_node = ToolNode(auto_tools)
# Order-tools will be handled by the order node.
order_tools = [add_to_order, confirm_order, get_order, clear_order, place_order]
# The LLM needs to know about all of the tools, so specify everything here
llm_with_tools = llm.bind_tools(auto_tools + order_tools)
graph_builder = StateGraph(OrderState)
# Nodes
graph_builder.add_node("chatbot", chatbot_with_tools)
graph_builder.add_node("human", human_node)
graph_builder.add_node("tools", tool_node)
graph_builder.add_node("ordering", order_node)
# Chatbot -> (ordering, tools, human, END)
graph_builder.add_conditional_edges("chatbot", maybe_route_to_tools)
# Human -> (chatbot, END)
graph_builder.add_conditional_edges("human", maybe_exit_human_node)
# Tools (both kinds) always route back to chat afterwards.
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge("ordering", "chatbot")
graph_builder.add_edge(START, "chatbot")
graph_with_order_tools = graph_builder.compile()
Image(graph_with_order_tools.get_graph().draw_mermaid_png())
# Uncomment this line to execute the graph:
# state = graph_with_order_tools.invoke({"messages": []}, config)
# Things to try:
# - Order a drink!
# - Make a change to your order.
# - "Which teas are from England?"
# - Note that the graph should naturally exit after placing an order.
pprint(state)
# Uncomment this once you have run the graph from the previous cell.
pprint(state["order"])
Je veux penser, analyser et écrire en français. Je ne veux pas parler et penser en français puis écrire en anglais. Alors, je trouve plus facile d’écrire en français en même temps. Avant aujourd’hui, je parlais et pensais en français et j’analysais en anglais. Pour parler couramment, je dois analyser le français en français.
French
English
Notes
Il neige beaucoup. L’avion ne peut pas partir, alors le vol est <annulé>
Annulé est un adjectif: annulée (fem. singulier), annulés (masc. pluriel)
Je déteste <la boxe>. C’est tellement ennuyeux.
<la boxe> c’est le sport.
Ce ne sont que des <hommes qui se battent> avec des gros <gants en cuir>
Battre est un verbe qui se conjugue avec avoir ; ‘se battre’, c’est ce que font les deux boxeurs.
Ils veulent juste gagner <un trophée stupide>.
Le nom ‘trophée’ est masculin.
Et après tu dis que la boxe, c’est ennuyeux?!
Cette phrase est amusante
J’en profite pour lire mon livre
Comment utiliser ‘en’. ‘En’ peut remplacer les expressions comme ‘de la soupe’. Et pour les quantités : ‘J’en ai six.’
Quel est le problème? Il a oublié de <mettre ses vêtements>.
Par exemple: Je mets ma chemise.
Vous servez <les repas> dans les chambres?
Le pluriel de ‘le repas’. Le mot ‘repas’ ne change pas en pluriel.
Ils sont allés à Paris pour le weekend
Il n’y en a plus
Elle mange tous les raisins et il n’y en a plus pour moi.
Il <est probablement> en train de dormir
L’adverbe ‘probablement’ est après le verbe qu’il modifie.
Tu as des timbres? Oui, j’en ai dix.
Utiliser ‘en’ avec une quantité
Ma sœur <économise de l’argent> et mon frère <en dépense beaucoup>.
Elle n’aime pas dépenser de l’argent. Il n’économise pas d’argent. ‘En dépense beaucoup’ est comme ‘dépense trop’.
Tu as beaucoup de chaussures parce que <tu en achètes trop>.
‘Tu en achètes trop’: une bonne expression
Le portefeuille
Où l’on met de l’argent
Où est mon sac à main?
Il ne faut pas trop dépenser.
Ne dépense pas beaucoup d’argent
Il a beaucoup d’argent et <il aime en dépenser>
J’en ai
Tu as <des pièces de> deux euros ?
Tu as de l’argent mais tu en dépenses seulement pour toi.
Ils ne dépensent pas leur argent, alors ils ont beaucoup à la banque.
J’ai commencé à réfléchir à la structure des phrases. J’ai pensé à l’ordre des pronoms avant le verbe actif. S’il y en a plus d’un, je veux savoir lequel est le plus proche du verbe actif. Maintenant, je sais que le pronom direct vient avant le pronom indirect, les deux avant le verbe. C’est important de savoir ça.
French
English
Notes
Elle ne veut <<aucune question>>
She doesn’t want <<any questions>>
Aucune is always followed by a singular noun
Cette phrase veut dire quoi?
What does this sentence mean?
This is a common saying to know
Nous avons des <<milliers de livres>>
We have <<thousands of books>>
Il m’a demandé de lui envoyer un message
He asked me to send him a message
How to compose with direct and indirect pronouns
Tu me donnes quoi?
What are you giving me?
Also good to know
Je veux <<parler au directeur>> tout de suite
I want <<to talk to the director>> right away
Je viens de finir ma lettre. Super, je te donne une enveloppe tout de suite.
I just finished my letter. Super, I am giving an envelope to you right now
Où est le bureau de la directrice?
Where is the director’s office?
Vous <<écrivez à la directrice>>
You are writing to the director.
Antoine n’est pas là. Tu sais où il est? Non, attends. Je lui écris un e-mail.
Antoine isn’t here. Do you know where he is? No, wait. I am writing an email to him.
Je suis fatiguée et j’ai besoin d’un café. Je <vais faire une pause>
I am tired and I need a coffee. I <<am going to take a break>>.
Tu peux lui expliquer ça au téléphone
You can explain this to him (or her) over the phone.
The word order
Je veux ce document <sur papier>. J’ai besoin d’une imprimante.
I want this document <<on paper>>. I need a printer.
Est-ce qu’<il y a encore du papier> dans l’imprimante?
Is there still paper in the printer?
Ces employés sont nouveaux et je leur <montre les bureaux>.
These employees are new and I am <<showing the offices to them>>.
Nous leur expliquons <comment marche l’imprimante>.
We are explaining <<how the printer works>> to them.
Sentence structure and word order
Hélène, si tu ne connais pas mon voisin célèbre, je peux <(te) le présenter>
Helen, if you don’t know my famous neighbor, I can <<introduce him (to you)>>.
Object pronouns appear in a specific order and both precede the verb: here ‘te’ comes first, then ‘le’, immediately before the infinitive.
Cette employée est nouvelle alors je lui présente les autres membres de l’équipe
This employee is new so I’m introducing the other team members to her.
When learning a new language, as I’m currently doing with French, once you get past the basics of conjugation, grammar, and pronunciation, becoming fluent requires a deeper understanding of the language. It is necessary not only to know how to speak the language but also to know its nuances, including common usage patterns, useful related phrases, and linguistic and historical facts: how certain phrases are commonly used, what the language sounds like spoken versus written, and which rules cause changes in spelling and word grouping. These are all insights that are helpful to a second-language learner when studying.
For example, in French the phrase “pas de” doesn’t change even when it is followed by a plural noun: it is used in “pas de viandes” as well as “pas de fromage”. This and other similar examples enrich and enhance French-language learning. A language learner has to be able to take notes on these variations and practice them.
Goals
During my studies, I often have to use multiple websites and apps to get the additional information I need to understand grammar, spelling, and language rules, to name a few. Additional details and examples are often what help most.
The goal of my project was to create a French language advisor to help me achieve my goal of being fluent in French. I’m currently at CEFR 43, where I can read, write, and speak about everyday things. At this point in my learning, it’s important to understand and explore the language’s nuances to foster thinking “in French”, versus just learning grammar, vocabulary, and conjugations.
I wanted to see if a chatbot could eliminate the need for multiple apps and websites to get the information I need.
The goals were to:
Create a model that can use an agent, Google Search grounding, and retrieval augmented generation (RAG) to supplement and support my French language studies
Have the model offer suggestions, ideas, examples and interesting information and facts about the French language
Include something interesting, funny and/or popular culture references
Features
The primary feature of the model is the ability to ask any question and receive results that may draw on embedded texts (RAG) or Google Search results. I needed the ability to input a word, phrase, or sentence in French and receive an appropriate response, preferably in French.
Chatbot features
Get interesting French language details including grammar, composition, and spelling with Google search grounding.
Create a domain-specific LLM: Add French language notes and documents
Use existing texts as input for RAG to reference French textbooks and documents
Use LangGraph to manage the conversation and its details
Implementation
This solution leverages LangGraph, Gemini’s Google Search grounding, RAG, and function tools. The steps used to implement it are outlined here. The initial stage of the project was to create a basic graph that included Google searches, client instantiation, and embedding creation. Once that was tested, RAG was added and integrated to provide a richer learning experience.
Prepare Data: The PDFs for RAG are French-language textbooks that were imported and converted to Document objects before being embedded using Google’s models/text-embedding-004. Once the embeddings were created, they were stored in a Chroma vectorstore database.
def create_embeddings(text: str) -> List[float]:
    """Create embeddings for a given text."""
    response = client.models.embed_content(
        model=EMBEDDING_MODEL,
        contents=text,
        config=types.EmbedContentConfig(task_type="semantic_similarity"),
    )
    return response.embeddings
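For reference, a minimal sketch of the vectorstore step, assuming `docs` holds the extracted Document objects (the variable name and persist directory are illustrative, not my exact code):

from langchain_chroma import Chroma
from langchain_google_genai import GoogleGenerativeAIEmbeddings

# Embed the extracted Documents and persist them in a local Chroma database.
embeddings = GoogleGenerativeAIEmbeddings(model="models/text-embedding-004")
vectorstore = Chroma.from_documents(docs, embedding=embeddings,
                                    persist_directory="chroma_db")
# Retrieval sketch: fetch the passages most similar to a query.
results = vectorstore.similarity_search("l'imparfait", k=3)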
Build Agents & Functions: The chatbot consists of LangGraph nodes, functions, conditional edges and edges as seen in this mermaid graph.
Search grounding: Google Search is used to return augmented results such as common usage examples and conjugations. The wrapper used is ChatGoogleGenerativeAI, part of the langchain_google_genai package; it was configured with temperature and top_p.
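A minimal configuration sketch; the temperature and top_p values shown are illustrative, not my exact settings:

from langchain_google_genai import ChatGoogleGenerativeAI

# Gemini chat model with sampling parameters configured.
llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash", temperature=0.7, top_p=0.95)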
LangGraph Nodes: This was the most challenging part for me, as I first had to learn LangGraph beyond the codelabs to understand it. Once I did, I was able to create and sequence the nodes and edges to get the model behavior and output that I wanted.
# Create and display the graph
workflow.add_node("chatbot", chatbot)
workflow.add_node("human", human_node)
workflow.add_node("tools", tools_node)
workflow.add_node("language_details", language_details_node)
# Chatbot may go to tools, or human.
workflow.add_conditional_edges("chatbot", maybe_route_to_tools)
# Human may go back to chatbot, or exit.
workflow.add_conditional_edges("human", maybe_exit_human_node)
# Define chatbot node edges
workflow.add_edge("language_details", "chatbot")
# Set graph start and compile
workflow.add_edge(START, "chatbot")
workflow_with_translation_tools = workflow.compile()
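Once compiled, the workflow runs like the codelab graphs; a sketch, assuming the conversation starts from an empty state:

# Raise the recursion limit so longer conversations can round-trip between nodes.
config = {"recursion_limit": 100}
state = workflow_with_translation_tools.invoke({"messages": []}, config)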
Reflection and Learning
Prior to participating in the Google Gen AI 5-Day Intensive, I was unfamiliar with LangGraph, RAG, and agents. Now I understand the promise of LangGraph, RAG, and agentic LLMs for making models more flexible and relevant.
Some of what I’ve had to learn is the following:
The types of Gemini models I can use for embedding and the chatbot. In this case, I used the text-embedding-004 embedding model without experiencing any issues.
How to configure chatbot and embedding clients. Both clients were easy to configure and were set once and done.
How Google AI can be used in a LangGraph graph. I used two major packages, GoogleGenerativeAIEmbeddings and ChatGoogleGenerativeAI. Using either package was easy because the API and code documentation was very good.
The difference between conditional edges and edges and how to create them. It was a bit tricky to decide what should be a tool versus a function for a conditional edge, versus a node. I also had to consider which logic should be placed where and how the output should be handled. It took several graph iterations to get it almost the way I like it. For the purposes of the capstone, it works as I intended.
How to import and extract text from PDFs. Importing and extracting text from PDF files was easy, as long as the PDF contained actual text and not images. To keep the scope of the project within my timeframe, I only wanted to work with PDFs that contained text.
How to embed documents for RAG contexts. Document embedding took several minutes for three decent-sized PDFs. It was the most time-consuming part of testing. It’s easy to see that as the set of documents I embed grows, more time and resources will be required.
Creating a Chroma vectorstore database. Creating and retrieving the vectorstore contents was fairly straightforward. It worked consistently locally and in the Kaggle Notebook.
Creating, updating and accessing a graph’s State object. I would have liked to have more time to master the state object but I was able to understand enough to make it work for my project. I would have liked to customize the state more but didn’t have the time to do so. I did find it to be a convenient resource to access messages no matter where it was invoked in the graph.
Creating multiple tools and nodes to be used in the graph. I knew what tools I wanted to include based on the resources and tools I currently use when studying French. The goal was to consolidate the information I receive from multiple sources into a single one. But there are other tools that would enrich my learning, for example, the ability to get rich text and image responses.
Using a prompt template to create and use basic prompts. I didn’t get as much time to investigate creating and using PromptTemplates. I know they could be useful for managing and transforming user input, and I will be exploring them further beyond this project; a starting sketch follows this list.
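As a starting point for that exploration, a minimal sketch (the template text is illustrative):

from langchain_core.prompts import PromptTemplate

# A reusable prompt with a single input variable.
template = PromptTemplate.from_template(
    "Explain the French phrase '{phrase}' with common usage examples."
)
print(template.format(phrase="pas de"))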
In addition, since I wrote the code locally in Zed, it took some time and effort to adapt the code for a Kaggle notebook. One of the issues I had was that the dependencies initially installed in Kaggle’s environment caused import and dependency resolution errors, and it took some time to make sure that the dependencies that worked locally were the same on Kaggle. In hindsight, I’m glad I developed it locally rather than starting in a notebook.
The Bigger Picture
Thinking beyond this capstone project, language learners need a wise companion to provide information and guidance: a language note-taking app with an AI companion that makes suggestions about language rules, common usages, examples, and comparisons. While the learner takes notes, or when they ask a question, the AI companion would appear in a sidebar with context-rich details, unique information, and historical facts. This way, as they learn, they get to consider and contrast what they’ve learned, creating a richer learning environment that motivates and encourages fluency. It could include images too.
This project takes the first steps toward creating such a companion, one that uses the keywords, phrases, and sentences I input to give me more context and nuance about their meaning, common usage patterns, and interesting cultural and historical facts. It’s about going beyond the basics to become immersed in French, especially if you don’t get to live in or visit a francophone country.
Ultimately, it can be developed into an AI-enabled, context-aware application. The AI integrated into the app would be a wise language advisor to help me learn the language, similar to what happens today with AI-enabled coding apps such as Zed, but more engaging. The AI assistant would remain aware of what I’m typing and offer context-rich suggestions. It would also let the user state their learning target at the start of the session; once the AI has this, it can provide a richer learning experience.
This chatbot is the first step in realizing this vision. I can now input anything in French and get a variety of details about it, including making additional requests for grammar, gender, and popular-culture details.
The next steps for my French language assistant are to continue refining and updating the graph, then create a user-friendly interface, before considering an app or website.
Je n’ai pas appris beaucoup de nouveaux mots aujourd’hui, mais j’ai appris quelques choses de grammaire. Je sais maintenant que l’ordre de plusieurs adjectifs avant le nom est très complexe. Il n’y a pas de règles précises quand il y a plusieurs adjectifs. J’ai aussi appris comment utiliser un adverbe au passé composé.
Quand j’étais petite, j’écoutais mes cousins parler en français. Depuis, j’adore la langue, et j’ai pris des cours de français à l’université. J’ai aussi regardé French In Action, une série de PBS. J’ai visité Paris une fois, en 1999.
J’adore la langue et la nourriture, et je veux y retourner et habiter là-bas. Avant, je veux apprendre le français jusqu’à pouvoir le parler couramment. Je voudrais écrire et parler le français mieux qu’aujourd’hui.
Donc maintenant, j’étudie le français tous les jours. J’utilise Duolingo et mes notes pour l’apprendre. Je recherche aussi les mots et la grammaire sur des sites web comme Lawless French, Reverso et French Language Stack Exchange.
Ici, je veux partager mes notes de langue parce qu’elles contiennent mes questions sur les leçons. Ainsi, je peux les retrouver quand je veux.
Codelab 2/2 from day one. The codelab is here on Kaggle. Download the code from GitHub to run it locally.
"""
Prompting
Google Gen AI 5-Day Intensive Course
Host: Kaggle
Day: 1
Codelab: https://www.kaggle.com/code/reniseblack/day-1-evaluation-and-structured-output/edit
"""
import enum
import io
import os
from pprint import pprint
import typing_extensions as typing
from google import genai
from google.api_core import retry
from google.genai import types
from IPython.display import Markdown, clear_output, display
is_retriable = lambda e: ( # noqa: E731
isinstance(e, genai.errors.APIError) and e.code in {429, 503})
genai.models.Models.generate_content = retry.Retry(
predicate=is_retriable)(genai.models.Models.generate_content)
client = genai.Client(api_key=os.environ["GOOGLE_API_KEY"])
# Create and send a single-turn prompt
MODEL = "gemini-2.0-flash"
prompt = "Explain AI to me like I'm a kid."
response = client.models.generate_content(
model=MODEL,
contents=prompt
)
# Get the response text then render as HTML (optional)
response_text = response.text
print("Model response: \n\n", Markdown(response.text))
section_break = "----------------------------"
print(section_break, "\n\nStarting a multi-turn chat ...\n\n")
# Start a multi-turn chat
chat_message = "Hello! My name is Zlork."
chat = client.chats.create(model=MODEL, history=[])
response = chat.send_message(chat_message)
print(response.text)
chat_message2 = "Can you tell me something interesting about dinosaurs?"
response = chat.send_message(chat_message2)
print(response.text)
chat_message3 = "Do you remember my name?"
response = chat.send_message(chat_message3)
print(response.text)
# See the list of available models
print("\n\nGetting a list of available models ... \n\n")
for model in client.models.list():
print(model.name)
print("\n\nGetting a list of gemini-2.0-flash models ... \n\n")
for model in client.models.list():
if model.name == 'models/gemini-2.0-flash':
pprint(model.to_json_dict())
break
# Set token limit to restrict output length
print("\n\nSetting a token limit ... \n\n")
short_config = types.GenerateContentConfig(max_output_tokens=200)
PROMPT_2 = ('Write a 1000 word essay on the importance of olives in modern '
'society.')
response = client.models.generate_content(
model=MODEL,
config=short_config,
contents=PROMPT_2
)
print(response.text)
print("\n\n")
PROMPT_3 = "Write a short poem on the importance of olives in modern society."
# config is where you specify top_k, temperature, etc
response = client.models.generate_content(
model=MODEL,
config=short_config,
contents=PROMPT_3
)
print(response.text)
print("\n\n")
short_config = types.GenerateContentConfig(max_output_tokens=100)
response = client.models.generate_content(
model=MODEL,
config=short_config,
    contents='Write a 150-word opinion about the importance of pine trees.')
print(response.text)
print("\n\n")
print("\n\nSetting a temperature limit ... \n\n")
# Set a temperature that helps manage the randomness
high_temp_config = types.GenerateContentConfig(temperature=2.0)
PROMPT_4 = "Pick a random colour... (respond in a single word)"
for _ in range(5):
response = client.models.generate_content(
model=MODEL,
config=high_temp_config,
contents = PROMPT_4
)
if response.text:
print(response.text, "-" * 25)
print("\n\n")
print("\n\nSetting a low temperature ... \n\n")
# Setting a low temperature
low_temp_config = types.GenerateContentConfig(temperature=0.0)
for _ in range(5):
response = client.models.generate_content(
model=MODEL,
config=low_temp_config,
contents = PROMPT_4
)
if response.text:
print(response.text, "-" * 25)
print("\n\n")
print("\n\nSetting top-p ... \n\n")
# Setting top-p that also controls output diversity
# It will determine when to stop selecting tokens that are least probable
TEMPERATURE_2 = 1.0
TOP_P1 = 0.05
model_config = types.GenerateContentConfig(
# These are default gemini-2.0-flash values
temperature=TEMPERATURE_2,
top_p=TOP_P1
)
STORY_PROMPT = ("You are a creative writer. Write a short story about a cat"\
"who goes on an adventure.")
response = client.models.generate_content(
model=MODEL,
config=model_config,
contents=STORY_PROMPT
)
print(response.text)
print("\n\n")
print("\n\nSetting top-p ... \n\n")
# Create a zero-shot prompt
TEMPERATURE_2 = 0.1
TOP_P2 = 1
MAX_OUTPUT_TOKENS = 5
model_config = types.GenerateContentConfig(
temperature=TEMPERATURE_2,
top_p=TOP_P2,
max_output_tokens=MAX_OUTPUT_TOKENS,
)
ZERO_SHOT_PROMPT = """Classify movie reviews as POSITIVE, NEUTRAL or NEGATIVE.
Review: "Her" is a disturbing study revealing the direction
humanity is headed if AI is allowed to keep evolving,
unchecked. I wish there were more movies like this masterpiece.
Sentiment: """
response = client.models.generate_content(
model=MODEL,
config=model_config,
contents=ZERO_SHOT_PROMPT)
print(response.text)
print("\n\n")
print("\n\nSetting top-p ... \n\n")
# Setting enum mode
class Sentiment(enum.Enum):
POSITIVE = "right on"
NEUTRAL = "meh"
NEGATIVE = "sucks lemons"
ENUM_TYPE = "text/x.enum"
response = client.models.generate_content(
model=MODEL,
config=types.GenerateContentConfig(
response_mime_type=ENUM_TYPE,
response_schema=Sentiment
),
contents=ZERO_SHOT_PROMPT)
print(response.text)
print("\n\n")
print("\n\nSetting top-p ... \n\n")
# Get the Sentiment returned as a Python object
enum_response = response.parsed
print(enum_response)
print(type(enum_response))
print("\n\n")
print("\n\nSetting top-p ... \n\n")
# Send one-shot and few-shot prompts
FEW_SHOT_PROMPT = """Parse a customer's pizza order into valid JSON:
EXAMPLE:
I want a small pizza with cheese, tomato sauce, and pepperoni.
JSON Response:
```
{
"size": "small",
"type": "normal",
"ingredients": ["cheese", "tomato sauce", "pepperoni"]
}
```
EXAMPLE:
Can I get a large pizza with tomato sauce, basil and mozzarella
JSON Response:
```
{
"size": "large",
"type": "normal",
"ingredients": ["tomato sauce", "basil", "mozzarella"]
}
```
ORDER:
"""
CUSTOMER_ORDER = "Give me a large with cheese & pineapple"
MAX_OUTPUT_TOKENS = 250
response = client.models.generate_content(
model='gemini-2.0-flash',
config=types.GenerateContentConfig(
temperature=TEMPERATURE_2,
top_p=TOP_P2,
max_output_tokens=MAX_OUTPUT_TOKENS,
),
contents=[FEW_SHOT_PROMPT, CUSTOMER_ORDER])
print(response.text)
print("\n\n")
print("\n\nSet to JSON response only ... \n\n")
# Set output to JSON format
class PizzaOrder(typing.TypedDict):
size: str
ingredients: list[str]
type: str
ORDER_1 = "Can I have a large dessert pizza with apple and chocolate"
response = client.models.generate_content(
model=MODEL,
config=types.GenerateContentConfig(
temperature=TEMPERATURE_2,
response_mime_type="application/json",
response_schema=PizzaOrder
),
contents=ORDER_1
)
print(response.text)
print("\n\n")
print("\n\nUse Chain of Thought (CoT) ... \n\n")
# Use Chain of Thought (CoT) to show reasoning to check hallucinations
CHAIN_OF_THOUGHT_PROMPT = """When I was 4 years old,
my partner was 3 times my age. Now, I am 20 years old.
How old is my partner? Return the answer directly."""
response = client.models.generate_content(
model=MODEL,
contents=CHAIN_OF_THOUGHT_PROMPT
)
print(response.text)
print("\n\n")
print("\n\nUse Chain of Thought (CoT) with step-by-step ... \n\n")
# Use Chain of Thought (CoT) to show reasoning to check hallucinations
CHAIN_OF_THOUGHT_PROMPT_2 = """When I was 4 years old, my partner was 3 times
my age. Now, I am 20 years old. How old is my partner?
Let's think step by step."""
response = client.models.generate_content(
model=MODEL,
contents=CHAIN_OF_THOUGHT_PROMPT_2
)
display(Markdown(response.text))
print("\n\n")
print("\n\nReAct prompt examples ... \n\n")
# Examples of ReAct prompts
model_instructions = """
Solve a question answering task with interleaving Thought, Action, Observation
steps. Thought can reason about the current situation,
Observation is understanding relevant information from an Action's output and
Action can be one of three types:
(1) <search>entity</search>, which searches the exact entity on Wikipedia and
returns the first paragraph if it exists. If not, it will return some
similar entities to search and you can try to search the information from
those topics.
(2) <lookup>keyword</lookup>, which returns the next sentence containing
keyword in the current context. This only does exact matches, so keep your
searches short.
(3) <finish>answer</finish>, which returns the answer and finishes the task.
"""
example1 = """Question
Musician and satirist Allie Goertz wrote a song about the "The Simpsons" ch\
aracter Milhouse, who Matt Groening named after who?
Thought 1
The question simplifies to "The Simpsons" character Milhouse is named after \
who. I only need to search Milhouse and find who it is named after.
Action 1
<search>Milhouse</search>
Observation 1
Milhouse Mussolini Van Houten is a recurring character in the Fox animated \
television series The Simpsons voiced by Pamela Hayden and
created by Matt Groening.
Thought 2
The paragraph does not tell who Milhouse is named after, maybe I can look u\
p "named after".
Action 2
<lookup>named after</lookup>
Observation 2
Milhouse was named after U.S. president Richard Nixon, whose middle name wa\
s Milhous.
Thought 3
Milhouse was named after U.S. president Richard Nixon, so the answer is Ric\
hard Nixon.
Action 3
<finish>Richard Nixon</finish>
"""
example2 = """Question
What is the elevation range for the area that the eastern sector of the
Colorado orogeny extends into?
Thought 1
I need to search Colorado orogeny, find the area that the eastern sector
of the Colorado orogeny extends into, then find the elevation range
of the area.
Action 1
<search>Colorado orogeny</search>
Observation 1
The Colorado orogeny was an episode of mountain building (an orogeny) in
Colorado and surrounding areas.
Thought 2
It does not mention the eastern sector. So I need to look up eastern sector.
Action 2
<lookup>eastern sector</lookup>
Observation 2
The eastern sector extends into the High Plains and is called the Central
Plains orogeny.
Thought 3
The eastern sector of Colorado orogeny extends into the High Plains. So I need
to search High Plains and find its elevation range.
Action 3
<search>High Plains</search>
Observation 3
High Plains refers to one of two distinct land regions
Thought 4
I need to instead search High Plains (United States).
Action 4
<search>High Plains (United States)</search>
Observation 4
The High Plains are a subregion of the Great Plains. From east to west, the
High Plains rise in elevation from around 1,800 to 7,000 ft (550 to 2,130 m).
Thought 5
High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer is
1,800 to 7,000 ft.
Action 5
<finish>1,800 to 7,000 ft</finish>
"""
question = """Question
Who was the youngest author listed on the transformers NLP paper?
"""
# You will perform the Action; so generate up to, but not including,
# the Observation.
react_config = types.GenerateContentConfig(
stop_sequences=["\nObservation"],
system_instruction=model_instructions + example1 + example2,
)
# Create a chat that has the model instructions and examples pre-seeded.
react_chat = client.chats.create(
model=MODEL,
config=react_config,
)
response = react_chat.send_message(question)
print(response.text)
print("\n\n")
print("\n\nAdding an observation with author names ... \n\n")
# The model won't find the authors, so supply them with a followup
observation = """Observation 1
[1706.03762] Attention Is All You Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan
N. Gomez, Lukasz Kaiser, Illia Polosukhin
We propose a new simple network architecture, the Transformer, based solely on
attention mechanisms, dispensing with recurrence and convolutions entirely.
"""
response = react_chat.send_message(observation)
print(response.text)
print("\n\n")
print("\n\nUse Gemini 2.0 Flash Thinking Mode ... \n\n")
# Use the Gemini 2.0 Flash Thinking model
THINKING_MODE_MODEL = "gemini-2.0-flash-thinking-exp"
THINKING_MODE_PROMPT = ("Who was the youngest author listed on the "
"transformers NLP paper?")
response = client.models.generate_content_stream(
model=THINKING_MODE_MODEL,
contents=THINKING_MODE_PROMPT
)
buffer = io.StringIO()
for chunk in response:
buffer.write(str(chunk.text))
# Display the response as it is streamed
print(chunk.text, end='')
# Render the response as formatted markdown
clear_output()
display(Markdown(buffer.getvalue()))
print("\n\n")
print("\n\nUse Gemini 2.0 to generate code ... \n\n")
# Generate code with Gemini
# The Gemini models love to talk, so it helps to specify they stick to the code
# if that is all that you want.
CODE_PROMPT = """Write a Python function to calculate the factorial of a number.
No explanation, provide only the code."""
response = client.models.generate_content(
model=MODEL,
config=types.GenerateContentConfig(
temperature=1,
top_p=1,
max_output_tokens=1024,
),
contents=CODE_PROMPT)
display(Markdown(response.text))
print("\n\n")
print("\n\nUse Gemini 2.0 to execute code ... \n\n")
# Execute code with Gemini
config = types.GenerateContentConfig(
tools=[types.Tool(code_execution=types.ToolCodeExecution())],
)
code_exec_prompt = """
Generate the first 14 odd prime numbers, then calculate their sum.
"""
response = client.models.generate_content(
model=MODEL,
config=config,
contents=code_exec_prompt)
for part in response.candidates[0].content.parts:
pprint(part.to_json_dict())
print("-----")
print("\n\n")
print("\n\nSee more parts of the response ... \n\n")
# See executable_code and code_execution_result
for part in response.candidates[0].content.parts:
if part.text:
display(Markdown(part.text))
elif part.executable_code:
display(Markdown(f'```python\n{part.executable_code.code}\n```'))
elif part.code_execution_result:
if part.code_execution_result.outcome != 'OUTCOME_OK':
display(Markdown(f'## Status {part.code_execution_result.outcome}'))
display(Markdown(f'```\n{part.code_execution_result.output}\n```'))
print("\n\n")
print("\n\nExplaining code ... \n\n")
# Get an explanation for code
# Fetch the file to explain. (In IPython you could use the original:
# file_contents = !curl https://raw.githubusercontent.com/magicmonty/bash-git-prompt/refs/heads/master/gitprompt.sh)
import urllib.request

FILE_URL = ("https://raw.githubusercontent.com/magicmonty/bash-git-prompt/"
            "refs/heads/master/gitprompt.sh")
with urllib.request.urlopen(FILE_URL) as resp:
    file_contents = resp.read().decode("utf-8")
explain_prompt = f"""
Please explain what this file does at a very high level. What is it,
and why would I use it?
```
{file_contents}
```
"""
response = client.models.generate_content(
model='gemini-2.0-flash',
contents=explain_prompt)
display(Markdown(response.text))