
keywords

  • Insert the suggested content: ➡️

  • Next suggestion: Option + ]; previous suggestion: Option + [

  • Accept a single line: Command + Right Arrow; accept multiple lines: Control/Fn + Command + Right Arrow

Commands

  • List the available GPUs

    nvidia-smi -L

  • Show the processes running on the GPUs and the memory each one uses

    nvidia-smi --query-compute-apps=pid,process_name,used_memory,gpu_uuid --format=csv

  • Show the free memory on each GPU (see the Python sketch after this list)

    nvidia-smi --query-gpu=memory.free --format=csv,noheader,nounits
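
The same queries are easy to wrap from Python. A minimal sketch, assuming nvidia-smi is on PATH; the helper names are made up for illustration:

    import subprocess


    def gpu_free_memory_mib():
        """Free memory (MiB) of each GPU, parsed from nvidia-smi output."""
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.free", "--format=csv,noheader,nounits"],
            text=True,
        )
        return [int(line) for line in out.strip().splitlines()]


    def pick_freest_gpu():
        """Index of the GPU with the most free memory (hypothetical helper)."""
        free = gpu_free_memory_mib()
        return max(range(len(free)), key=free.__getitem__)


    if __name__ == "__main__":
        print(gpu_free_memory_mib())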

model_validator

model_validator is a decorator provided by the Pydantic library for defining model validators. A model validator is a special method that runs custom validation logic when a model instance is created or its fields are updated.

    @model_validator(mode="after")
    def validate_engine_and_run_func(self):
        if self.search_engine is None:
            self.search_engine = SearchEngine.from_search_config(self.config.search, proxy=self.config.proxy)
        return self
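
For context, a minimal self-contained sketch of the same "after"-validator pattern; the Settings model and its fields are made up for illustration (Pydantic v2):

    from typing import Optional

    from pydantic import BaseModel, model_validator


    class Settings(BaseModel):
        # Hypothetical model, for illustration only.
        host: str = "localhost"
        port: int = 8080
        url: Optional[str] = None

        @model_validator(mode="after")
        def build_url(self):
            # Runs after field validation; fill in derived fields lazily.
            if self.url is None:
                self.url = f"http://{self.host}:{self.port}"
            return self


    print(Settings().url)  # -> http://localhost:8080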

extension

cake

  • Celebration confetti

    confetti
  • Bouncing window

    Toggle Bounce Animation

UI

request

conversation

  • request

    curl -X POST https://chat.openai.com/backend-api/conversation
  • payload

    {
      "action": "next",
      "messages": [
        {
          "id": "aaa25811-d62c-4f23-963a-77c6eb90b899",
          "author": {
            "role": "user"
          },
          "content": {
            "content_type": "text",
            "parts": [
              "你是如何带上上下文进行问答的,用python代码说下示例"
            ]
          },
          "metadata": {}
        }
      ],
      "conversation_id": "0e02cc29-8cc2-4305-91a1-a3db1427f400",
      "parent_message_id": "36d4ff94-fb3f-4f9b-beeb-d9c93f5752fe",
      "model": "text-davinci-002-render-sha",
      "timezone_offset_min": -480,
      "suggestions": [],
      "history_and_training_disabled": false,
      "arkose_token": null,
      "conversation_mode": {
        "kind": "primary_assistant"
      },
      "force_paragen": false,
      "force_rate_limit": false
    }
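
The context is carried by the conversation_id / parent_message_id pair: each new user message points at the previous message, so the backend can rebuild the thread. A minimal sketch of assembling such a payload (field values mirror the capture above; auth headers and the actual HTTP call are out of scope here):

    import uuid


    def build_conversation_payload(question, conversation_id, parent_message_id,
                                   model="text-davinci-002-render-sha"):
        """Assemble a /backend-api/conversation payload shaped like the capture above."""
        return {
            "action": "next",
            "messages": [
                {
                    "id": str(uuid.uuid4()),
                    "author": {"role": "user"},
                    "content": {"content_type": "text", "parts": [question]},
                    "metadata": {},
                }
            ],
            # These two fields chain the new message onto the existing thread.
            "conversation_id": conversation_id,
            "parent_message_id": parent_message_id,
            "model": model,
        }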

ces/v1/t

  • request

    https://chat.openai.com/ces/v1/t
  • payload

    {
      "timestamp": "2024-01-19T08:35:53.805Z",
      "integrations": {
        "Segment.io": true
      },
      "userId": "user-EyJTf0sEV6sYCFhgOwqCHGD2",
      "anonymousId": "dafc8561-54d1-4cb8-9cdf-91e5e4f7162e",
      "event": "View Template Prompt Ignore Suggestions",
      "type": "track",
      "properties": {
        "origin": "chat",
        "openai_app": "API"
      },
      "context": {
        "page": {
          "path": "/c/0e02cc29-8cc2-4305-91a1-a3db1427f400",
          "referrer": "",
          "search": "",
          "title": "feel",
          "url": "https://chat.openai.com/c/0e02cc29-8cc2-4305-91a1-a3db1427f400"
        },
        "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
        "locale": "zh-CN",
        "library": {
          "name": "analytics.js",
          "version": "npm:next-1.56.0"
        },
        "userAgentData": {
          "brands": [
            {
              "brand": "Not_A Brand",
              "version": "8"
            },
            {
              "brand": "Chromium",
              "version": "120"
            },
            {
              "brand": "Google Chrome",
              "version": "120"
            }
          ],
          "mobile": false,
          "platform": "macOS"
        }
      },
      "messageId": "ajs-next-c0f7b6265ed3eab440b0d44f21f53ffa",
      "sentAt": "2024-01-19T08:35:54.068Z",
      "_metadata": {
        "bundled": [
          "Segment.io"
        ],
        "unbundled": [],
        "bundledIds": []
      }
    }

rgstr

  • request

    https://events.statsigapi.net/v1/rgstr
  • payload

    {
      "events": [
        {
          "eventName": "chatgpt_prompt_ignore_suggestions",
          "user": {
            "userID": "user-EyJTf0sEV6sYCFhgOwqCHGD2",
            "custom": {
              "is_paid": false,
              "workspace_id": "b27c1f77-bb94-4e8f-a6d3-4b30a0eec927"
            },
            "locale": "zh-CN",
            "statsigEnvironment": {
              "tier": "production"
            }
          },
          "value": null,
          "metadata": null,
          "time": 1705653353804,
          "statsigMetadata": {
            "currentPage": "https://chat.openai.com/c/0e02cc29-8cc2-4305-91a1-a3db1427f400"
          }
        }
      ],
      "statsigMetadata": {
        "sdkType": "js-client",
        "sdkVersion": "4.39.3",
        "stableID": "f0eaa0cd-f15f-4d71-ab51-8a91e92aea53"
      }
    }

backend/lat/r

  • request

    https://chat.openai.com/backend-api/lat/r
  • payload

    {
      "server_request_id": "847dbf366a92408c-SIN",
      "model": "text-davinci-002-render-sha",
      "preflight_time_ms": 0.09999990463256836,
      "count_tokens": 472,
      "ts_first_token_ms": 1548.7000000476837,
      "ts_max_token_time_ms": 764,
      "ts_mean_token_without_first_ms": 30.007855625922005,
      "ts_median_token_without_first_ms": 21.799999952316284,
      "ts_min_token_time_ms": 0.09999990463256836,
      "ts_p95_token_without_first_ms": 88.99999988079071,
      "ts_p99_token_without_first_ms": 139.65999991893773,
      "ts_std_dev_token_ms": 44.98320283392762,
      "ts_total_request_ms": 15695.399999856949
    }
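
These fields look like client-side streaming latency stats: time to first token plus summary statistics over the per-token gaps (first token excluded). A rough sketch of how such numbers could be derived from token arrival timestamps; the field meanings are inferred from the names, not documented:

    import statistics


    def latency_report(arrival_ms):
        """Summarize streaming latency from token arrival times (ms since request start)."""
        gaps = [b - a for a, b in zip(arrival_ms, arrival_ms[1:])]  # per-token gaps, first token excluded
        gaps_sorted = sorted(gaps)
        pct = lambda p: gaps_sorted[min(len(gaps_sorted) - 1, int(p * len(gaps_sorted)))]
        return {
            "count_tokens": len(arrival_ms),
            "ts_first_token_ms": arrival_ms[0],
            "ts_mean_token_without_first_ms": statistics.mean(gaps),
            "ts_median_token_without_first_ms": statistics.median(gaps),
            "ts_p95_token_without_first_ms": pct(0.95),
            "ts_p99_token_without_first_ms": pct(0.99),
            "ts_std_dev_token_ms": statistics.stdev(gaps),
            "ts_total_request_ms": arrival_ms[-1],
        }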

API

client.chat.completions.create

  • request

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # prompt_str and user_input are assumed to be defined by the caller
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        messages=[
            {
                "role": "system",
                "content": prompt_str,
            },
            {
                "role": "user",
                "content": user_input.question,
            },
        ],
    )
  • response

    {
      "id": "chatcmpl-8ify2KU1ANzQPQyD8WVMp6rZWgYtj",
      "choices": [
        {
          "finish_reason": "stop",
          "index": 0,
          "logprobs": null,
          "message": {
            "content": "",
            "role": "assistant",
            "function_call": null,
            "tool_calls": null
          }
        }
      ],
      "created": 1705658446,
      "model": "gpt-3.5-turbo-1106",
      "object": "chat.completion",
      "system_fingerprint": "fp_c596c86df9",
      "usage": {
        "completion_tokens": 51,
        "prompt_tokens": 1068,
        "total_tokens": 1119
      }
    }

GPU

  • Large models typically have a huge number of parameters and complex neural network structures, so they need a high degree of parallelism and large memory bandwidth to train and run efficiently. GPUs' advantages in these areas make them the hardware of choice for running large AI models. This does not mean a CPU cannot run these models at all; it is just comparatively inefficient and can be very slow, which is why GPUs are usually chosen for deep learning workloads in practice.

Fine-tuning vs Embedding

  • If you are trying to “teach” the model new information, embeddings are the way to go. If you want to change the structure or the way it responds, use fine-tuning.

  • Fine-tuning: Teach the model how to answer a question (e.g. structure/format, personality, etc.)
  • Embedding: Provide the model with new/specific information with which to answer questions (see the sketch after this list).
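
A minimal sketch of the embedding route: embed a few reference snippets, retrieve the one closest to the question, and paste it into the prompt. The model names and document texts are placeholders, and this assumes the openai Python client v1.x:

    from openai import OpenAI

    client = OpenAI()

    # Placeholder knowledge base; in practice these would be chunks of your documents.
    docs = [
        "Our refund window is 30 days.",
        "Support hours are 9am-6pm CST.",
    ]


    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return [d.embedding for d in resp.data]


    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = lambda v: sum(x * x for x in v) ** 0.5
        return dot / (norm(a) * norm(b))


    question = "How long do I have to return a product?"
    doc_vecs = embed(docs)
    q_vec = embed([question])[0]
    best = max(range(len(docs)), key=lambda i: cosine(doc_vecs[i], q_vec))

    answer = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        messages=[
            {"role": "system", "content": f"Answer using only this context: {docs[best]}"},
            {"role": "user", "content": question},
        ],
    )
    print(answer.choices[0].message.content)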

Magic commands

Magic command    Description
%pwd    Show the current working directory
%ls    List the contents of the current or a specified directory
%cat    Show the contents of a specified file
%hist    Show the input history
%matplotlib inline    Embed matplotlib charts in the page output
%config InlineBackend.figure_format='svg'    Render charts as SVG (vector graphics)
%run    Run a specified program
%load    Load a specified file into the cell
%quickref    Show the IPython quick reference
%timeit    Run code multiple times and report its execution time
%prun    Run code with cProfile.run and show the profiler output
%who / %whos    Show the variables in the namespace
%xdel    Delete an object and clean up all references to it
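
For example, a few of these as typed in a notebook cell:

    %config InlineBackend.figure_format='svg'   # vector output for matplotlib charts
    %timeit sum(range(10_000))                  # time a statement over many runs
    %who                                        # list the variables defined so far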

Shortcuts

JupyterLab shortcuts fall into two groups: command-mode shortcuts and edit-mode shortcuts.

Edit mode is the state in which you are typing code or writing a document. Press Esc in edit mode to return to command mode, and press Enter in command mode to enter edit mode.

  • Command-mode shortcuts:
Shortcut    Description
Alt + Enter    Run the current cell and insert a new cell below
Shift + Enter    Run the current cell and select the cell below
Ctrl + Enter    Run the current cell
j / k, Shift + j / Shift + k    Select the cell below/above; extend the selection downward/upward
a / b    Insert a new cell below/above
c / x    Copy / cut the cell
v / Shift + v    Paste the cell below/above
dd / z    Delete the cell / restore the deleted cell
Shift + l    Show or hide line numbers for the current/all cells
Space / Shift + Space    Scroll the page down/up
  • Edit-mode shortcuts:
Shortcut    Description
Shift + Tab    Show hint/tooltip information
Ctrl + ] / Ctrl + [    Increase/decrease indentation
Alt + Enter    Run the current cell and insert a new cell below
Shift + Enter    Run the current cell and select the cell below
Ctrl + Enter    Run the current cell
Ctrl + Left / Right    Move the cursor to the beginning/end of the line
Ctrl + Up / Down    Move the cursor to the beginning/end of the cell
Up / Down    Move the cursor up/down one line, or to the previous/next cell

Note: on macOS, replace the Alt key with Option and the Ctrl key with Command.