# Bardapi
## Google Bard Cookies

Install the package:

```shell
pip install bardapi
```
Then open https://bard.google.com in your browser and copy the cookie values from DevTools:

1. Press F12 to open DevTools.
2. Open the **Application** tab.
3. Navigate to Storage -> Cookies -> https://bard.google.com, search for "1PSID", and copy each Cookie Value.
Three cookies are needed:

- `__Secure-1PSID`
- `__Secure-1PSIDTS`
- `__Secure-1PSIDCC`
To keep the values out of your code, install `python-dotenv` alongside `bardapi` and store them in a `.env` file:

```shell
pip install bardapi python-dotenv
```

```
SECURE_1PSID=<__Secure-1PSID>
SECURE_1PSIDTS=<__Secure-1PSIDTS>
SECURE_1PSIDCC=<__Secure-1PSIDCC>
```
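A quick sanity check after loading the `.env` file can catch a missing or empty variable before the first API call fails with an opaque error. The helper below is my own sketch (not part of bardapi); it assumes the variable names used in the `.env` file above:

```python
import os

# Variable names as used in the .env file above.
REQUIRED_COOKIES = ("SECURE_1PSID", "SECURE_1PSIDTS", "SECURE_1PSIDCC")

def missing_cookies(env=os.environ):
    """Return the names of required cookie variables that are unset or empty."""
    return [name for name in REQUIRED_COOKIES if not env.get(name)]
```

Calling `missing_cookies()` right after `load_dotenv()` and raising on a non-empty result fails fast with a clear message instead of a confusing authentication error later.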
Load the cookies from the environment and pass them to `BardCookies` as a dictionary:

```python
import os

from bardapi import BardCookies
from dotenv import load_dotenv

load_dotenv(override=True)

cookie_dict = {
    "__Secure-1PSID": os.getenv("SECURE_1PSID"),
    "__Secure-1PSIDTS": os.getenv("SECURE_1PSIDTS"),
    "__Secure-1PSIDCC": os.getenv("SECURE_1PSIDCC"),
}

bard = BardCookies(
    cookie_dict=cookie_dict,
    timeout=20,            # request timeout in seconds
    run_code=False,        # do not execute code blocks returned by Bard
    conversation_id=None,  # start a new conversation
    language=None,         # auto-detect the response language
)
```
Alternatively, attach the cookies to a `requests.Session` and pass the session to `Bard`:

```python
import os

import requests
from bardapi import Bard, SESSION_HEADERS
from dotenv import load_dotenv

load_dotenv(override=True)

session = requests.Session()
session.cookies.set("__Secure-1PSID", os.getenv("SECURE_1PSID"))
session.cookies.set("__Secure-1PSIDTS", os.getenv("SECURE_1PSIDTS"))
session.cookies.set("__Secure-1PSIDCC", os.getenv("SECURE_1PSIDCC"))
session.headers = SESSION_HEADERS

bard = Bard(session=session, timeout=20, run_code=False, conversation_id=None, language=None)
```
You can steer Bard toward machine-readable output by spelling out the expected JSON shape in the prompt:

```python
prompt = '''
How do I learn the English language?
Answer only with a JSON string that can be parsed with json.loads() in Python, shaped like:
[
    {"id": 1, "content": "draft1"},
    {"id": 2, "content": "draft2"}
]
'''

bard_answer = bard.get_answer(prompt)
print(bard_answer)
```
The response is a dictionary (values abbreviated here):

```python
{
    "choices": [
        {"content": ["..."], "id": "..."},
        {"content": ["..."], "id": "..."},
        {"content": ["..."], "id": "..."},
    ],
    "code": None,
    "content": "[]",
    "conversation_id": "",
    "factuality_queries": None,
    "images": [],
    "links": [],
    "program_lang": None,
    "response_id": "",
    "status_code": 200,
    "text_query": ["how to learn English language", 1],
}
```
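The alternative drafts live under `choices`, each with an `id` and a `content` list. A small accessor (my own helper, with the field layout assumed from the sample response above) makes that explicit:

```python
def first_choice_text(answer):
    """Return the text of the first draft in a Bard-style response dict.

    Falls back to the top-level "content" field when no drafts are present.
    Field layout is assumed from the sample response shown above.
    """
    choices = answer.get("choices") or []
    if choices and choices[0].get("content"):
        return choices[0]["content"][0]
    return answer.get("content", "")
```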
Because the prompt requested JSON, the `content` field can be parsed directly:

```python
import json

answer_dict = json.loads(bard_answer['content'])
answer_dict
```
```json
[
    {
        "id": 1,
        "title": "Immerse yourself in the language",
        "description": "Surround yourself with English as much as possible. Listen to English music, watch English movies and TV shows, and read English books and articles. This will help you to get a feel for the language and improve your comprehension skills."
    },
    {
        "id": 2,
        "title": "Practice speaking English",
        "description": "The best way to improve your speaking skills is to practice as often as possible. Find a language partner, join an English conversation group, or take online speaking classes. The more you speak English, the more comfortable you will become with it."
    },
    {
        "id": 3,
        "title": "Focus on grammar and vocabulary",
        "description": "Learning grammar and vocabulary is essential for mastering any language. There are many resources available to help you learn English grammar and vocabulary, such as online courses, textbooks, and apps."
    },
    {
        "id": 4,
        "title": "Don't be afraid to make mistakes",
        "description": "Everyone makes mistakes when they are learning a new language. Don't let this discourage you. The important thing is to keep practicing and not give up."
    },
    {
        "id": 5,
        "title": "Make learning fun",
        "description": "Learning a new language should be enjoyable. Find ways to make learning English fun, such as watching English comedies or playing English-language games."
    }
]
```
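Calling `json.loads` directly works here, but models sometimes wrap their JSON in Markdown code fences or return plain prose, so a defensive parse is safer in scripts. The helper below is a sketch of that pattern, not part of bardapi:

```python
import json
import re

def parse_model_json(text):
    """Best-effort JSON parse of a model reply.

    Strips surrounding Markdown code fences (``` or ```json) if present,
    then attempts json.loads; returns None when parsing fails.
    """
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", text.strip())
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        return None
```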
The parsed list converts cleanly into a `pandas.DataFrame`:

```python
import pandas as pd

df = pd.DataFrame(answer_dict)
df
```

| id | title                            | description                                       |
|---:|----------------------------------|---------------------------------------------------|
|  1 | Immerse yourself in the language | Surround yourself with English as much as poss... |
|  2 | Practice speaking English        | The best way to improve your speaking skills i... |
|  3 | Focus on grammar and vocabulary  | Learning grammar and vocabulary is essential f... |
|  4 | Don't be afraid to make mistakes | Everyone makes mistakes when they are learning... |
|  5 | Make learning fun                | Learning a new language should be enjoyable. F... |
You can also ask Bard about an image by passing the raw bytes to `ask_about_image`:

```python
image_path = './img.webp'

with open(image_path, 'rb') as f:
    image = f.read()

bard_answer = bard.ask_about_image('What is in the image?', image)
print(bard_answer['content'])
```

Sample output:
The image you sent is a diagram of a worker node in Apache Spark. A worker node is a machine that runs the Spark executors, which are responsible for executing the tasks that make up a Spark application.
The diagram shows the following components of a worker node:
* **Driver program:** The driver program is the main program that submits the Spark application to the cluster. It uses the SparkContext to communicate with the cluster manager and the executors.
* **Cluster manager:** The cluster manager is responsible for managing the worker nodes and scheduling the tasks to be executed on them.
* **Executor:** An executor is a process that runs on a worker node and executes the tasks that are assigned to it.
* **Cache:** The cache is a local storage area on the worker node where executors can store data that is frequently used, such as intermediate results.
The diagram also shows the flow of data between the different components. When the driver program submits an application to the cluster, the cluster manager assigns the tasks to the executors on the worker nodes. The executors then execute the tasks and return the results to the driver program.
The cache can be used to improve the performance of Spark applications by reducing the amount of data that needs to be transferred between the worker nodes and the driver program. For example, if an executor is working on a task that requires the same data as a previous task, the executor can get the data from the cache instead of having to download it from the driver program.
Here is a more detailed explanation of the different components in the diagram:
* **Driver program:** The driver program is the main program that submits the Spark application to the cluster. It is responsible for creating the SparkContext, which is the object that is used to communicate with the cluster manager and the executors. The driver program also creates the Spark jobs, which are the units of work that are executed by the executors.
* **Cluster manager:** The cluster manager is responsible for managing the worker nodes and scheduling the tasks to be executed on them. It also monitors the health of the worker nodes and restarts them if they fail.
* **Executor:** An executor is a process that runs on a worker node and executes the tasks that are assigned to it. Executors are created and managed by the cluster manager. When an executor starts up, it registers with the cluster manager and waits for tasks to be assigned to it. When an executor is assigned a task, it downloads the necessary data from the driver program or the cache, executes the task, and returns the results to the driver program.
* **Cache:** The cache is a local storage area on the worker node where executors can store data that is frequently used, such as intermediate results. The cache can be used to improve the performance of Spark applications by reducing the amount of data that needs to be transferred between the worker nodes and the driver program.
I hope this explanation is helpful.
Bard can also synthesize speech and return the audio bytes:

```python
audio = bard.speech('Hello, How was your day?', lang='en-US')

with open("speech.ogg", "wb") as f:
    f.write(bytes(audio['audio']))
```
**Troubleshooting.** If a call fails with:

```
Exception: SNlM0e value not found. Double-check __Secure-1PSID value or pass it as token='xxxxx'.
```

your cookies have most likely expired or rotated. Refresh https://bard.google.com in the browser, copy the cookie values again, update your `.env` file, and rerun the code.
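Since `bardapi` raises a plain `Exception` at call time when the session is stale, a small retry wrapper (my own sketch, not a bardapi feature) can smooth over transient failures; a persistent `SNlM0e` error still means the cookies must be refreshed by hand:

```python
import time

def get_answer_with_retry(bard, prompt, retries=2, delay=3.0):
    """Call bard.get_answer(prompt), retrying on any exception.

    Re-raises the last exception once the retry budget is exhausted.
    """
    for attempt in range(retries + 1):
        try:
            return bard.get_answer(prompt)
        except Exception:
            if attempt == retries:
                raise
            time.sleep(delay)
```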