Source: Wang Haiwen
%pip install -U langchain_community tiktoken langchain_openai chromadb langchain langchain_core
%pip install sentence_transformers
%pip install huggingface_hub
%pip install ipywidgets
%pip install unstructured
%pip install sentencepiece bs4
from langchain_community.document_loaders import WebBaseLoader
url = "https://techdiylife.github.io/blog/202401/240327-ollama-20question.html"
loader = WebBaseLoader(
    web_paths=[url],
    requests_kwargs={
        "headers": {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3"
        }
    },
)
docs = loader.load()
from langchain_text_splitters import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(chunk_size=300, chunk_overlap=50)
split_text = text_splitter.split_documents(docs)
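For intuition, the splitting step can be approximated with a naive fixed-window sketch (`chunk` is a hypothetical helper; the real RecursiveCharacterTextSplitter additionally prefers to break at separators such as newlines and spaces):

```python
def chunk(text, chunk_size=300, chunk_overlap=50):
    # Slide a fixed-size window over the text; consecutive
    # windows share `chunk_overlap` characters.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

text = "".join(str(i % 10) for i in range(1000))
pieces = chunk(text)
# 4 pieces of at most 300 chars; adjacent pieces share their last/first 50 chars
```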
%pip install langchain_ollama socksio httpcore[socks]
from langchain_community.embeddings import OllamaEmbeddings
# from langchain_ollama import OllamaEmbeddings
from langchain_chroma import Chroma
# Download an embedding model with ollama: https://ollama.ai/
# ollama pull mofanke/acge_text_embedding
# ollama pull qwen3-embedding:4b
embedding_model = OllamaEmbeddings(model="qwen3-embedding:4b")  # the parameter name is model, not model_name
vectorstore = Chroma.from_documents(documents=split_text, embedding=embedding_model)
retriever = vectorstore.as_retriever()
print("Vector store built")
Vector store built
from langchain_core.prompts import ChatPromptTemplate
template = """You are an assistant for question answering. Use the following retrieved context to answer the question. If the context contains no relevant information, answer "No relevant information found." Keep the answer concise, three sentences at most.
Question: {question}
Context: {context}
Answer:
"""
prompt = ChatPromptTemplate.from_template(template)
# The prompt can also be pulled from the LangChain hub
# from langchain import hub
# prompt = hub.pull("tongshi/prompt_template_ragopenai")
# Online API
# import os
# from dotenv import load_dotenv
# from langchain_openai import ChatOpenAI
# load_dotenv()  # loads the contents of the .env file
# llm = ChatOpenAI(
#     openai_api_base="https://api.openai.com/v1",
#     openai_api_key=os.getenv("OPENAI_API_KEY"),
#     model="gpt-3.5-turbo",
# )
# ==============================================
# Local Ollama
from langchain_community.llms import Ollama
llm = Ollama(model="deepseek-r1:7b")
# from langchain_ollama import OllamaLLM
# llm = OllamaLLM(model="deepseek-r1:7b")
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
def format_docs(docs):
    return "\n\n".join(d.page_content for d in docs)
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
# RunnablePassthrough(): forwards the original input unchanged to the next step
result = rag_chain.invoke("Which models does Ollama support?")
print(result)
The models Ollama supports include versions such as 70b and 8b, depending on what is downloaded and configured. The instruct tag usually refers to Chinese generation models, while the text tag refers to Chinese reasoning models, defaulting to text-3.5-turbo-instruct. See https://ollama.com/library for the official details.
test_questions = [
    "Which models does Ollama support?",
    "On Linux, where are models stored after download?",
    "On Windows, how do I change the default download location?",
    "What is a Modelfile?",
    "How do I import a model in safetensors format?",
    "Is there a web UI available?",
]
for i, q in enumerate(test_questions[:3], start=1):
    print(f"Question {i}: {q}")
    print(f"Answer {i}: {rag_chain.invoke(q)}\n")
Question 1: Which models does Ollama support?
Answer 1: In the Ollama model library, 70b and 8b correspond to the Qwen-7B and Qwen-8B models; instruct denotes the Instruct series of instruction-tuned models, and text denotes dedicated text-generation models.
Question 2: On Linux, where are models stored after download?
Answer 2: On Linux, downloaded models are stored by default under `/home/<username>/.ollama/models`.
Question 3: On Windows, how do I change the default download location?
Answer 3: How do I change the default directory for downloaded models?
On Windows, you can go to the Documents or Document folder settings; the default storage location is C:\Users\<username>\.ollama\models. Open the system properties, choose "Documents" or "More folders", find the ollama_models folder, and change its path.
!ollama pull mxbai-embed-large
# or
!ollama pull mofanke/acge_text_embedding
# or
!ollama pull qwen3-embedding:4b
from langchain_community.embeddings import OllamaEmbeddings
# from langchain_ollama import OllamaEmbeddings
embedding_model = OllamaEmbeddings(model="qwen3-embedding:4b")
# Local Ollama
from langchain_community.llms import Ollama
llm = Ollama(model="deepseek-r1:7b")
# llm = Ollama(model="qwen3")
Streaming output is supported:
for chunk in llm.stream("Tell me a joke"):
    print(chunk, end="", flush=True)
prompt = hub.pull("rlm/rag-prompt")
prompt.messages[0].prompt.template = """You are an assistant for question answering. Use the following retrieved context to answer the question. If the context contains no relevant information, answer "No relevant information found." Keep the answer concise, three sentences at most.
Question: {question}
Context: {context}
Answer: """
Alternatively, define the prompt directly with ChatPromptTemplate.from_template().
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})  # return the top three results
Runnable is LangChain's core abstraction: a uniform interface for running a task across all kinds of components (LLMs, prompts, chains, agents, parsers, and so on). Its main methods:
- `.invoke(inputs)`: run once and return the result
- `.batch(inputs, batch_size=1)`: run a batch of inputs and return the results
- `.stream(inputs)`: run once, returning the result as a stream
- `.bind(**kwargs)`: bind arguments and return a new Runnable
- `.ainvoke(inputs)` / `.abatch(inputs, batch_size=1)`: asynchronous variants

chain = prompt | llm | StrOutputParser()
chain.invoke({"topic": "bears"})
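The pipe composition above can be illustrated with a minimal stdlib-only sketch (`MiniRunnable` and the `demo_*` names are hypothetical stand-ins; LangChain's real Runnable is far richer, but `|` chains steps in the same way):

```python
class MiniRunnable:
    """Toy version of the Runnable protocol: | composes steps left to right."""
    def __init__(self, fn):
        self.fn = fn
    def invoke(self, x):
        return self.fn(x)
    def batch(self, xs):
        return [self.invoke(x) for x in xs]
    def __or__(self, other):
        # (a | b).invoke(x) == b.invoke(a.invoke(x))
        return MiniRunnable(lambda x: other.invoke(self.invoke(x)))

demo_prompt = MiniRunnable(lambda d: f"Tell me a joke about {d['topic']}")
demo_llm = MiniRunnable(str.upper)      # stand-in for a real model
demo_parser = MiniRunnable(str.strip)   # stand-in for StrOutputParser
demo_chain = demo_prompt | demo_llm | demo_parser
demo_chain.invoke({"topic": "bears"})   # "TELL ME A JOKE ABOUT BEARS"
```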
# TXT file
from langchain.document_loaders import TextLoader
loader = TextLoader('data/guide.txt')
docs = loader.load()
# HTML (note: WebBaseLoader expects a URL; for a local HTML file use UnstructuredHTMLLoader)
from langchain.document_loaders import WebBaseLoader
loader = WebBaseLoader("example.html")
docs = loader.load()
# PDF file (requires pypdf)
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("example.pdf")
docs = loader.load()
# from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_chroma import Chroma
model_name = "sentence-transformers/all-MiniLM-L6-v2"
model_kwargs = {"device": "cuda"}  # use "cpu" if no GPU; "gpu" is not a valid device name
encode_kwargs = {"normalize_embeddings": False}
hf_embeddings = HuggingFaceEmbeddings(
    model_name=model_name,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs)
vector_store = Chroma.from_documents(splits, embedding=hf_embeddings)  # splits: the chunked documents from the splitter
transformers + pipeline:
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline
model_id = "THUDM/chatglm3-6b"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512)  # the parameter is max_new_tokens, not new_max_length
llm = HuggingFacePipeline(pipeline=pipe)
from langchain_huggingface import HuggingFacePipeline
llm = HuggingFacePipeline.from_model_id(
    model_id="google/gemma2-7b",
    task="text-generation",
    device_map="auto",
    pipeline_kwargs={"temperature": 0.7, "max_length": 512},  # generation options go in pipeline_kwargs
    model_kwargs={"trust_remote_code": True},
)
| Direction | Description |
|---|---|
| Model upgrade | Use a larger, stronger model |
| Embedding optimization | Use a more effective embedding model |
| Text splitting | Tune chunk_size (e.g. 256) |
| Retrieval strategy | Tune the k value (e.g. 1) |
| Prompt engineering | Clear instructions, constrained output format, few-shot examples |
| Reranking | Re-order retrieved results with bge-rerank |
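The reranking idea in the last row can be sketched with a toy scorer (a real reranker such as bge-reranker scores each query-passage pair with a cross-encoder model; token overlap stands in for the model here, and `rerank` is a hypothetical helper):

```python
def rerank(query, passages, top_k=3):
    """Re-order retrieved passages by a relevance score (toy: token overlap)."""
    q = set(query.lower().split())
    scored = [(len(q & set(p.lower().split())), p) for p in passages]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [p for _, p in scored[:top_k]]

candidates = [
    "ollama supports many models",
    "the weather is nice",
    "ollama model storage path",
]
top = rerank("which models does ollama support", candidates, top_k=2)
# the passage sharing the most tokens with the query comes first
```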
Generated with assistance from Tencent Yuanbao
Based on the code analysis, the QWeather API now primarily uses JWT authentication; detailed usage notes follow.
The QWeather API uses JWT (JSON Web Token) authentication in place of the traditional API key:
Key generation:
openssl genpkey -algorithm ED25519 -out ed25519-private.pem \
&& openssl pkey -pubout -in ed25519-private.pem > ed25519-public.pem
JWT generation:
import jwt
import time
from cryptography.hazmat.primitives.serialization import load_pem_private_key

def get_jwt():
    # Load the private key
    with open('ed25519-private.pem', 'rb') as f:
        private_key = load_pem_private_key(f.read(), password=None)
    # Build the payload
    payload = {
        'iat': int(time.time()) - 30,  # issued-at, 30 s early to absorb clock skew
        'exp': int(time.time()) + 900, # expiry (15 minutes)
        'sub': PROJECT_ID              # project ID
    }
    # Set the headers
    headers = {'kid': KEY_ID}  # credential (key) ID
    # Generate the JWT
    encoded_jwt = jwt.encode(payload, private_key, algorithm='EdDSA', headers=headers)
    return encoded_jwt
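For debugging, a generated token's header and payload can be inspected with the standard library alone, since a JWT is three dot-separated base64url segments (no signature verification here; `b64url` and `inspect_jwt` are hypothetical helpers, and the token below is a hand-built unsigned example):

```python
import base64
import json

def b64url(obj):
    # Serialize a dict and base64url-encode it without padding, as JWTs do.
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def inspect_jwt(token):
    """Decode a JWT's header and payload (signature is NOT checked)."""
    header_b64, payload_b64, _sig = token.split(".")
    pad = lambda s: s + "=" * (-len(s) % 4)   # restore stripped padding
    decode = lambda s: json.loads(base64.urlsafe_b64decode(pad(s)))
    return decode(header_b64), decode(payload_b64)

# Hypothetical unsigned token, just to show the structure:
token = ".".join([
    b64url({"alg": "EdDSA", "kid": "KEY_ID"}),
    b64url({"sub": "PROJECT_ID", "exp": 1700000900}),
    "sig",
])
header, payload = inspect_jwt(token)
# header["alg"] is "EdDSA"; payload carries sub/exp as set above
```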
Usage: send the token in the Authorization header:
Authorization: Bearer {JWT token}
Base URL:
https://{API_HOST}/{version}/{endpoint}
- API_HOST: your assigned API host (e.g. abcd.as.qweatherapi.com)
- version: the API version (e.g. v7)
- endpoint: the endpoint path (e.g. weather/now)
Request method: GET, with parameters passed in the query string.
City lookup endpoint:
def get_city_code(city, jwt_token):
    url = f"https://{API_HOST}/geo/v2/city/lookup?location={city}&number=1&lang=zh"
    headers = {'Authorization': f'Bearer {jwt_token}'}
    r = requests.get(url, headers=headers)
    data = r.json()
    if data.get("code") == "200" and data.get("location"):
        return data["location"][0]["id"]  # return the city ID
    return None
Weather lookup endpoint:
def get_weather(city_id, jwt_token):
    url = f"https://{API_HOST}/v7/weather/now?location={city_id}&lang=zh"
    headers = {'Authorization': f'Bearer {jwt_token}'}
    r = requests.get(url, headers=headers)
    data = r.json()
    if data.get("code") == "200" and data.get("now"):
        return data["now"]  # return the weather data
    return None
The API returns JSON with the following main fields:
- code: response status code ("200" means success)
- location: location info (returned by the city lookup endpoint)
- now: real-time weather data (returned by the weather endpoint)
Weather data fields:
- temp: temperature
- text: weather condition
- windDir: wind direction
- windScale: wind force
- humidity: humidity
- icon: weather icon code
# Full workflow example
jwt_token = get_jwt()
city = "西安"
city_id = get_city_code(city, jwt_token)
weather = get_weather(city_id, jwt_token)
if weather:
    print(f"Weather for {city}:")
    print(f"Temperature: {weather.get('temp', 'unknown')}°C")
    print(f"Condition: {weather.get('text', 'unknown')}")
    print(f"Wind direction: {weather.get('windDir', 'unknown')}")
    print(f"Wind force: {weather.get('windScale', 'unknown')}")
    print(f"Humidity: {weather.get('humidity', 'unknown')}%")
When calling the API with the requests library, add exception handling to keep the program robust.
weather_widget.py:
import jwt
import requests
import os
import time
from flask import Flask, jsonify, request
from flask_cors import CORS
from cryptography.hazmat.primitives.serialization import load_pem_private_key
from cryptography.hazmat.backends import default_backend
app = Flask(__name__)
CORS(app)  # enable cross-origin requests
# Configuration
API_HOST = ""     # see https://console.qweather.com/setting?lang=zh
PROJECT_ID = ""   # project ID
KEY_ID = ""       # credential ID
PRIVATE_KEY_PATH = os.path.join(os.path.dirname(__file__), 'ed25519-private.pem')
def get_jwt():
    """Generate a JWT token."""
    with open(PRIVATE_KEY_PATH, 'rb') as f:
        private_key = load_pem_private_key(f.read(), password=None, backend=default_backend())
    payload = {
        'iat': int(time.time()) - 30,
        'exp': int(time.time()) + 900,
        'sub': PROJECT_ID
    }
    headers = {'kid': KEY_ID}
    return jwt.encode(payload, private_key, algorithm='EdDSA', headers=headers)
def get_city_code(city, jwt_token):
    """Look up the city code."""
    url = f"https://{API_HOST}/geo/v2/city/lookup?location={city}&number=1&lang=zh"
    headers = {'Authorization': f'Bearer {jwt_token}'}
    try:
        r = requests.get(url, headers=headers)
        r.raise_for_status()
        data = r.json()
        if data.get("code") == "200" and data.get("location"):
            return data["location"][0]["id"]
    except Exception as e:
        app.logger.error(f"City lookup failed: {e}")
    return None
def get_weather(city_id, jwt_token):
    """Fetch the current weather."""
    url = f"https://{API_HOST}/v7/weather/now?location={city_id}&lang=zh"
    headers = {'Authorization': f'Bearer {jwt_token}'}
    try:
        r = requests.get(url, headers=headers)
        r.raise_for_status()
        data = r.json()
        if data.get("code") == "200" and data.get("now"):
            return data["now"]
    except Exception as e:
        app.logger.error(f"Weather lookup failed: {e}")
    return None
@app.route('/weather', methods=['GET'])
def weather():
    """Weather API endpoint."""
    city = request.args.get('city', '北京')
    jwt_token = get_jwt()
    if not jwt_token:
        return jsonify({"error": "JWT generation failed"}), 500
    city_id = get_city_code(city, jwt_token)
    if not city_id:
        return jsonify({"error": f"City not found: {city}"}), 404
    weather_data = get_weather(city_id, jwt_token)
    if not weather_data:
        return jsonify({"error": "Failed to fetch weather data"}), 500
    return jsonify({
        "city": city,
        "temperature": weather_data.get('temp', 'N/A'),
        "condition": weather_data.get('text', 'N/A'),
        "windDir": weather_data.get('windDir', 'N/A'),
        "windScale": weather_data.get('windScale', 'N/A'),
        "humidity": weather_data.get('humidity', 'N/A'),
        "icon": weather_data.get('icon', '100')
    })
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=True)
weather.html:
<!DOCTYPE html>
<html>
<head>
<title>天气小组件</title>
<style>
.weather-widget { font-family: Arial, sans-serif; background: rgba(255,255,255,0.9); padding: 16px; border-radius: 24px; font-size: 14px; box-shadow: 0 2px 8px rgba(0,0,0,0.1); width: 280px; margin: 20px auto; }
.weather-form { display: flex; align-items: center; gap: 8px; margin-bottom: 12px; }
.weather-form input { padding: 8px 12px; border: 1px solid #e0e0e0; border-radius: 16px; flex-grow: 1; }
.weather-form button { padding: 8px 16px; background: #4285f4; color: white; border: none; border-radius: 16px; cursor: pointer; }
.weather-info { display: flex; align-items: center; gap: 12px; }
.weather-icon { font-size: 32px; min-width: 40px; text-align: center; }
.weather-details { line-height: 1.5; }
.temperature { font-size: 24px; font-weight: bold; }
.condition { font-size: 16px; }
.wind-humidity { font-size: 12px; color: #666; }
</style>
<link href="https://cdn.jsdelivr.net/npm/qweather-icons@1.7.0/font/qweather-icons.css" rel="stylesheet">
</head>
<body>
<div class="weather-widget">
<div class="weather-form">
<input type="text" id="city-input" placeholder="输入城市名" value="西安">
<button id="get-weather">查询</button>
</div>
<div class="weather-info" id="weather-container">
<div class="weather-icon"><i class="qi-999" id="weather-icon"></i></div>
<div class="weather-details">
<div class="temperature" id="temperature">--°C</div>
<div class="condition" id="condition">加载中...</div>
<div class="wind-humidity">
<span id="wind">风向: --</span> | <span id="humidity">湿度: --%</span>
</div>
</div>
</div>
</div>
<script>
document.getElementById('get-weather').addEventListener('click', fetchWeather);
window.addEventListener('DOMContentLoaded', fetchWeather);
async function fetchWeather() {
const city = document.getElementById('city-input').value;
const container = document.getElementById('weather-container');
const originalHTML = container.innerHTML;
container.innerHTML = '<div style="text-align:center;padding:10px">加载中...</div>';
try {
const response = await fetch(`http://127.0.0.1:5000/weather?city=${encodeURIComponent(city)}`);
const text = await response.text();
const data = JSON.parse(text);
if (data.error) {
container.innerHTML = `<div style="color:red;text-align:center">${data.error}</div>`;
return;
}
if (!data.temperature || !data.condition || !data.windDir || !data.windScale || !data.humidity || !data.icon) {
container.innerHTML = '<div style="color:red;text-align:center">数据不完整: 缺少必要字段</div>';
return;
}
container.innerHTML = originalHTML;
document.getElementById('weather-icon').className = `qi-${data.icon}`;
document.getElementById('temperature').textContent = `${data.temperature}°C`;
document.getElementById('condition').textContent = data.condition;
document.getElementById('wind').textContent = `风向: ${data.windDir} ${data.windScale}级`;
document.getElementById('humidity').textContent = `湿度: ${data.humidity}%`;
} catch (error) {
console.error('请求错误:', error);
container.innerHTML = `<div style="color:red;text-align:center">${error.name === 'SyntaxError' ? '数据格式错误' : '网络请求失败'}: ${error.message}</div>`;
}
}
</script>
</body>
</html>
Here is the whole yml again:
name: Generate README
on:
  issues:
    types: [opened, edited]
  issue_comment:
    types: [created, edited]
  push:
    branches:
      - main
    paths:
      - main.py
env:
  GITHUB_NAME: SylverQG
  GITHUB_EMAIL: doublc_qluv@163.com
jobs:
  sync:
    name: Generate README
    runs-on: ubuntu-latest
    if: github.repository_owner_id == github.event.issue.user.id || github.event_name == 'push'
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: 3.13
          cache: pip
          cache-dependency-path: "requirements.txt"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m venv venv
          source venv/bin/activate
          pip install -r requirements.txt
      - name: Generate new md
        run: |
          source venv/bin/activate
          python main.py ${{ secrets.G_T }} ${{ github.repository }} --issue_number '${{ github.event.issue.number }}'
      - name: Push README
        run: |
          git config --local user.email "${{ env.GITHUB_EMAIL }}"
          git config --local user.name "${{ env.GITHUB_NAME }}"
          git add BACKUP/*.md
          git commit -a -m 'update new blog' || echo "nothing to commit"
          git push || echo "nothing to push"
A free public acceleration service: just add a 'k' in front of github.com; if you hit an access restriction, refresh.
Any GitHub project works: prefix the domain with k so it becomes kgithub.com. If it does not work, you are most likely accessing from a non-Chinese IP; the service requires a Chinese IP.
Example: for github.com/XIU2/UserScript, change https://github.com/X/UserScript to https://kgithub.com/X/UserScript
A China-hosted acceleration service: replace github.com with github.hscsec.cn to fix access problems.
Example: for github.com/X/UserScript, change https://github.com/X/UserScript to https://github.hscsec.cn/X/UserScript
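The substitution both services rely on is a plain host rewrite, which can be sketched as (`to_mirror` is a hypothetical helper):

```python
def to_mirror(url, mirror_host="kgithub.com"):
    """Rewrite a github.com URL to a mirror host (kgithub.com or github.hscsec.cn)."""
    # Replace only the first occurrence, so paths containing
    # "github.com" elsewhere are left untouched.
    return url.replace("github.com", mirror_host, 1)

to_mirror("https://github.com/X/UserScript")
# "https://kgithub.com/X/UserScript"
```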
The linear algebra part of Postgraduate Entrance Exam Math I covers many topics; some quick-memorization guidelines:
- Memorize the basic concepts — vectors, matrices, linear systems, eigenvalues, eigenvectors — which underpin everything that follows.
- Master matrix operations (addition, subtraction, multiplication, transpose); they come up constantly in practice.
- Understand determinants and rank and how to compute them; both are essential for solving linear systems and eigenvalue problems.
- Understand linear dependence and independence of vector sets, and how to find a maximal linearly independent subset; these are the basis for linear-system and eigenvalue problems.
- Master methods for solving linear systems: Cramer's rule, Gaussian elimination, matrix factorizations.
- Understand eigenvalues and eigenvectors and how to compute them; they are key properties of a matrix, important for linear systems and similarity diagonalization.
- Master orthogonal transformations, including the definition and properties of orthogonal matrices; they are the foundation for quadratic-form problems.
These points only stick with repeated practice: work through exercises and mock exams, and consult textbooks and study guides to deepen understanding and memory.
Quick notes for the probability part of Math I (the original is a set of rhyming Chinese mnemonics, rendered here as plain points):
- Random events: mutually exclusive and complementary events combine by addition and subtraction, conditional probability by multiplication and division; the binomial distribution is the core; in multiple choice, try ruling out the impossible cases first.
- Random variables and their distributions: keep discrete and continuous cases apart — focus on the distribution law (pmf) for discrete variables and the density for continuous ones; exams mainly ask you to find distributions via the distribution function.
- Numerical characteristics: expectation, variance, covariance; do not confuse variance with covariance; exams focus on expectation.
- Law of large numbers and central limit theorem: Chebyshev's inequality is tested most; the other theorems appear mainly in multiple choice.
- Basic concepts of mathematical statistics: keep sample size, sample mean, and sample variance straight, and distinguish the population distribution function from the empirical (sample) one.
- Parameter estimation: point vs. interval estimation, method of moments vs. maximum likelihood, each with pros and cons; exams mostly test the method of moments.
I hope these quick notes help.
\text{transmission delay} = \frac{\text{frame length (b)}}{\text{sending rate (b/s)}}
\text{propagation delay} = \frac{\text{channel length (m)}}{\text{signal propagation speed over the channel (m/s)}}
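As a quick worked example of the two delay formulas (the link and channel values below are made up for illustration):

```python
# Transmission delay: time to push all bits of the frame onto the link
frame_bits = 1_000_000            # frame length: 10^6 b
rate_bps = 100_000_000            # sending rate: 10^8 b/s
transmission_delay = frame_bits / rate_bps    # 0.01 s

# Propagation delay: time for the signal to travel the channel
channel_m = 2_000_000             # channel length: 2000 km
speed_mps = 200_000_000           # ~2x10^8 m/s in fiber (about 2/3 c)
propagation_delay = channel_m / speed_mps     # 0.01 s
```

Note that the two delays depend on different things: sending a bigger frame raises only the transmission delay, while a longer link raises only the propagation delay.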
OSI · 0-1 knapsack problem · Overview of how this repository is built
ActionFlow version (probably not the ideal one; the "Node.js 12 to Node.js 16" warning still seems to appear). Problems during setup (ones I hit myself and ones summarized from the issues):
In .github/workflows/generate_readme.yml, change the name and email under env to your own. The other modifications to .github/workflows/generate_readme.yml are the commented places below:
name: Generate README
on:
  issues:
    types: [opened, edited]
  issue_comment:
    types: [created, edited]
  push:
    branches:
      - main
    paths:
      - main.py
env:
  GITHUB_NAME: SylverQG
  GITHUB_EMAIL: doublc_qluv@163.com
jobs:
  sync:
    name: Generate README
    runs-on: ubuntu-latest
    if: github.repository_owner_id == github.event.issue.user.id || github.event_name == 'push'
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v3
        with:
          python-version: 3.9
          cache: pip
          cache-dependency-path: "requirements.txt"
      # - name: Configure pip cache
      #   uses: actions/cache@v2
      #   id: pip-cache
      #   with:
      #     path: venv
      #     key: pip-1-${{ hashFiles('**/requirements.txt') }}
      #     restore-keys: |
      #       pip-
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m venv venv
          source venv/bin/activate
          pip install -r requirements.txt
        # if: steps.pip-cache.outputs.cache-hit != 'true'
      - name: Generate new md
        run: |
          source venv/bin/activate
          python main.py ${{ secrets.G_T }} ${{ github.repository }} --issue_number '${{ github.event.issue.number }}'
      # - name: Push README
      #   uses: github-actions-x/commit@v2.9
      #   with:
      #     github-token: ${{ secrets.G_T }}
      #     commit-message: "UPDATE README"
      #     files: BACKUP/*.md README.md feed.xml
      #     rebase: 'true'
      #     name: SylverQG
      #     email: doublc_qluv@163.com
      - name: Push README
        run: |
          git config --local user.email "${{ env.GITHUB_EMAIL }}"
          git config --local user.name "${{ env.GITHUB_NAME }}"
          git add BACKUP/*.md
          git commit -a -m 'update new blog' || echo "nothing to commit"
          git push || echo "nothing to push"
Warning details: actions/checkout@v2, actions/setup-python@v1, and actions/cache@v1 should be updated to actions/checkout@v3, actions/setup-python@v2, and actions/cache@v2.
Generate README
Node.js 12 actions are deprecated. Please update the following actions to use Node.js 16: actions/setup-python@v2, actions/cache@v2. For more information see: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/.
github-actions-x/commit@v2.6 should be updated to v2.9 (the latest).
The error shows:
Command line: /usr/bin/git pull --rebase --autostash origin main
Stderr: fatal: detected dubious ownership in repository at '/github/workspace'
To add an exception for this directory, call:
git config --global --add safe.directory /github/workspace
The workflow succeeds but the repository shows no change: regenerate the token and add it to the repository settings again. (This resembles the local error of lacking permission on the repository.)
remote: Permission to SylverQG/Blogs.git denied to github-actions[bot].
fatal: unable to access 'https://github.com/SylverQG/Blogs/': The requested URL returned error: 403
nothing to push
To make blogs from the issues
To say goodbye to the old world
To put the ping in the e-world
A simple solution to a complicated problem. You will die, but GitHub lives on.
I have tried many kinds of GitHub blog setups and settled on this one: issues, with a nice frontend on top.
Next I may migrate posts bit by bit from my original GitHub repository, turn the github.io site into this style as well, or simply find another good-looking theme.