
Linux: a complete tutorial on deploying and running a local large model with Ollama (OpenAI-compatible API, llama2 example)


# Install the required packages

```shell
# Linux install
curl -fsSL https://ollama.com/install.sh | sh
pip install ollama
```

# Start the Ollama server

```shell
ollama serve
```

# Run the llama2 model (open a new terminal)

```shell
# Enable network acceleration on AutoDL (skip this step on other platforms)
source /etc/network_turbo
ollama run llama2-uncensored:7b-chat-q6_K
```

# To download the model without running it

```shell
# Pull the model only
ollama pull llama2-uncensored:7b-chat-q6_K
```

Once the model has started, you can chat with it directly in the terminal.


# Chatting through the Python API

```python
import ollama

response = ollama.chat(model='llama2', messages=[
    {
        'role': 'user',
        'content': 'Why is the sky blue?',
    },
])
print(response['message']['content'])
```
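The dict returned by `ollama.chat` carries the model's reply under `message.content`. A minimal sketch of that shape, using an illustrative mocked response so it runs without a server (the text is a stand-in, not real model output):

```python
# Shape of a non-streaming ollama.chat() result; values are illustrative.
mock_response = {
    'model': 'llama2',
    'message': {'role': 'assistant', 'content': 'Because of Rayleigh scattering.'},
    'done': True,
}

def reply_text(response):
    """Pull the assistant's text out of a chat response dict."""
    return response['message']['content']

print(reply_text(mock_response))
```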


# Chatting through the OpenAI-compatible API

```python
from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1',
    api_key='ollama',  # required, but unused
)
response = client.chat.completions.create(
    model="llama2",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The LA Dodgers won in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)
print(response.choices[0].message.content)
```
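Passing `stream=True` to `client.chat.completions.create` makes the server return the reply in chunks, with each text fragment in `choices[0].delta.content` (and `None` on the final chunk). A sketch of accumulating them, using mocked chunks in place of a live stream; the fragment text is illustrative:

```python
from types import SimpleNamespace

def make_chunk(text):
    # Mimic the nested attribute shape of a streamed chat-completion chunk.
    return SimpleNamespace(
        choices=[SimpleNamespace(delta=SimpleNamespace(content=text))]
    )

# Stand-in for: client.chat.completions.create(..., stream=True)
stream = [
    make_chunk('It was played in '),
    make_chunk('Arlington, Texas.'),
    make_chunk(None),  # final chunk carries no content
]

# Skip None fragments and join the rest into the full reply.
answer = ''.join(c.choices[0].delta.content or '' for c in stream)
print(answer)
```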


# Streaming API with cURL

```shell
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
```
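The `/api/generate` endpoint streams newline-delimited JSON: one object per line, each carrying a `response` text fragment and a `done` flag that becomes `true` on the last line. A sketch of reassembling the full answer from lines shaped like that curl output (the fragment text is illustrative):

```python
import json

# Sample NDJSON lines in the shape the endpoint streams back.
sample_lines = [
    '{"model":"llama2","response":"The sky ","done":false}',
    '{"model":"llama2","response":"scatters blue light.","done":true}',
]

def collect(lines):
    """Concatenate the 'response' fragments of an NDJSON stream."""
    return ''.join(json.loads(line)['response'] for line in lines)

print(collect(sample_lines))
```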


# References

llama2 (ollama.com): https://ollama.com/library/llama2

OpenAI compatibility · Ollama Blog: https://ollama.com/blog/openai-compatibility