I'm trying to run a FastAPI application with SSL, using uvicorn as the server. I can serve HTTP on port 80:
if __name__ == '__main__':
    uvicorn.run("main:app", port=80, host='0.0.0.0', reload=True, reload_dirs=["html_files"])
To serve over HTTPS instead, I do the following:
if __name__ == '__main__':
    uvicorn.run("main:app", port=443, host='0.0.0.0', reload=True, reload_dirs=["html_files"],
                ssl_keyfile="/etc/letsencrypt/live/my_domain/privkey.pem",
                ssl_certfile="/etc/letsencrypt/live/my_domain/fullchain.pem")
How can I run both, or simply add an HTTP-to-HTTPS redirect?
Note: this is on a server where I don't want to use nginx; I already know how to set up the HTTPS redirect with nginx.
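One option (a minimal sketch, assuming uvicorn and fastapi are installed and the certificate paths above are valid) is to run a second tiny app on port 80 whose only job is to 301-redirect everything to HTTPS, in a separate process alongside the main app on 443. The `to_https` helper and the process layout are illustrative, not a drop-in answer:

```python
from multiprocessing import Process
from urllib.parse import urlsplit, urlunsplit

def to_https(url: str) -> str:
    # Rewrite an http:// URL to https:// on the default port 443,
    # dropping any explicit :80 from the host.
    parts = urlsplit(url)
    host = parts.hostname or ""
    return urlunsplit(("https", host, parts.path, parts.query, parts.fragment))

def run_redirector():
    # Tiny FastAPI app on port 80 that redirects every request to HTTPS.
    import uvicorn
    from fastapi import FastAPI, Request
    from fastapi.responses import RedirectResponse

    app = FastAPI()

    @app.api_route("/{path:path}", methods=["GET", "POST", "HEAD"])
    async def redirect(request: Request, path: str):
        return RedirectResponse(to_https(str(request.url)), status_code=301)

    uvicorn.run(app, host="0.0.0.0", port=80)

def run_main():
    import uvicorn
    uvicorn.run("main:app", port=443, host="0.0.0.0",
                ssl_keyfile="/etc/letsencrypt/live/my_domain/privkey.pem",
                ssl_certfile="/etc/letsencrypt/live/my_domain/fullchain.pem")

def serve_both():
    # Call this in place of the single uvicorn.run (not invoked here).
    Process(target=run_redirector, daemon=True).start()
    run_main()
```

Calling `serve_both()` from the `__main__` block would replace the single `uvicorn.run` call; note that `reload=True` is generally incompatible with spawning a second server process this way.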
I'm trying to create a PyTorch distributed dataloader with torchmeta, but it fails with a deadlock:
python ~/ultimate-utils/tutorials_for_myself/my_torchmeta/torchmeta_ddp.py
test_basic_ddp_example
ABOUT TO SPAWN WORKERS (via mp.spawn)
-> started ps with rank=0
-> rank=0
-> mp.current_process()=<SpawnProcess name='SpawnProcess-1' parent=54167 started>
-> os.getpid()=54171
device=device(type='cpu')
----> setting up rank=0 (with world_size=4)
---> MASTER_ADDR='127.0.0.1'
---> 57813
---> backend='gloo'
-> started ps with rank=2
-> rank=2
-> mp.current_process()=<SpawnProcess name='SpawnProcess-3' parent=54167 started>
-> os.getpid()=54173
device=device(type='cpu')
----> setting up rank=2 (with world_size=4)
---> MASTER_ADDR='127.0.0.1'
---> 57813
---> backend='gloo'
-> started ps with rank=1
-> rank=1
-> mp.current_process()=<SpawnProcess name='SpawnProcess-2' parent=54167 started> …

I'm trying to use BertForSequenceClassification for a simple article-classification task.
No matter how I train it (freezing all layers except the classification layer, making all layers trainable, or making only the last k layers trainable), I always get a near-random accuracy score. The model's training accuracy never exceeds 24-26% (my dataset has only 5 classes).
I'm not sure what I'm doing wrong in designing/training the model. I've tried the model with multiple datasets, and every time it gives the same random-baseline accuracy.
Dataset I'm using: BBC articles (5 classes)
https://github.com/zabir-nabil/pytorch-nlp/tree/master/bbc
It contains 2225 documents from the BBC news website, corresponding to stories in five topical areas from 2004-2005. Number of classes: 5 (business, entertainment, politics, sport, tech)
I've included the model and training parts, which are the most relevant (to avoid any irrelevant details). I've also included the full source code + data in case that's useful for reproducibility.
My guess is that something is wrong with the way I designed the network, or with the way I'm passing the attention masks/labels to the model. Also, the 512-token limit shouldn't be an issue, since most texts are shorter than 512 tokens (average length < 300).
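For reference, the attention mask for a padded batch is simply 1 for real tokens and 0 for padding; a minimal, framework-free sketch (assuming pad token id 0, as for bert-base-uncased; the token ids below are illustrative):

```python
def build_attention_mask(batch_ids, pad_id=0):
    # 1 where the token is real, 0 where it is padding
    return [[0 if tok == pad_id else 1 for tok in seq] for seq in batch_ids]

# two sequences padded to length 5
batch = [[101, 2054, 102, 0, 0],
         [101, 2054, 2003, 2023, 102]]
masks = build_attention_mask(batch)
# masks == [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]]
```

If a mask built this way disagrees with what is being passed to the model, that mismatch alone can flatten the learning signal.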
Model code:
import torch
from torch import nn
from transformers import BertForSequenceClassification

class BertClassifier(nn.Module):
    def __init__(self):
        super(BertClassifier, self).__init__()
        self.bert = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=5)
        # as we have 5 classes
        # we want our output as probability so, in the evaluation mode, we'll pass the logits to a softmax layer
        self.softmax = torch.nn.Softmax(dim = …

I wrote a simple gRPC service and client in Python. Sometimes the client suddenly fails with the following error:
Traceback (most recent call last):
File "grpc_client.py", line 35, in <module>
response = stub.YOLO_frame(image_req)
File "/home/vmuser/anaconda3/envs/lp_reg_brta/lib/python3.7/site-packages/grpc/_channel.py", line 923, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/home/vmuser/anaconda3/envs/lp_reg_brta/lib/python3.7/site-packages/grpc/_channel.py", line 826, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses"
debug_error_string = "{"created":"@1613478605.328638006","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":5390,"referenced_errors":[{"created":"@1613478605.328628806","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":397,"grpc_status":14}]}"
>
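Since StatusCode.UNAVAILABLE usually means the channel could not (re)connect, a common mitigation is to retry the call with backoff (and/or pass `wait_for_ready=True` to the stub call). A generic, library-free retry sketch; the wrapper name is a placeholder, not gRPC API:

```python
import time

def call_with_retry(fn, *args, attempts=3, backoff=0.5, retriable=(Exception,), **kwargs):
    # Retry fn up to `attempts` times, doubling the sleep between tries;
    # re-raise the last error if every attempt fails.
    delay = backoff
    for attempt in range(1, attempts + 1):
        try:
            return fn(*args, **kwargs)
        except retriable:
            if attempt == attempts:
                raise
            time.sleep(delay)
            delay *= 2

# usage (sketch): response = call_with_retry(stub.YOLO_frame, image_req,
#                                            retriable=(grpc.RpcError,))
```

Retrying does not fix the underlying cause (server restart, network partition, wrong address), but it papers over transient connection drops like the one in the traceback.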
My server:
import grpc
from concurrent import futures
import time
import darknet
# import the generated classes …

In TensorFlow/Keras, we can simply set return_sequences=False on the last LSTM layer before the classification/dense/activation (softmax/sigmoid) layer to get rid of the time dimension.
In PyTorch, I haven't found anything similar. For a classification task I don't need a sequence-to-sequence model but a many-to-one architecture, like this:
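For reference, a sketch (with toy sizes assumed) of the two usual PyTorch equivalents of return_sequences=False: slicing the last timestep of the LSTM output, or concatenating the last layer's final hidden states of both directions:

```python
import torch
from torch import nn

lstm = nn.LSTM(input_size=64, hidden_size=8, num_layers=2,
               batch_first=True, bidirectional=True)
x = torch.randn(4, 512, 64)          # (batch, seq_len, features)
out, (h_n, c_n) = lstm(x)            # out: (4, 512, 16) - one vector per timestep

last_step = out[:, -1, :]            # (4, 16): keep only the final timestep
# for a bidirectional LSTM, the last layer's final hidden states are
# h_n[-2] (forward direction) and h_n[-1] (backward direction)
last_hidden = torch.cat([h_n[-2], h_n[-1]], dim=1)   # (4, 16)

head = nn.Linear(2 * 8, 5)           # classifier over 5 classes
logits = head(last_hidden)           # (4, 5) - the time dimension is gone
```

Either reduction feeds a `(batch, 2 * hidden_size)` tensor into the linear layer, so the classifier no longer needs to flatten all 512 timesteps.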
Here is my simple bi-LSTM model.
import torch
from torch import nn

class BiLSTMClassifier(nn.Module):
    def __init__(self):
        super(BiLSTMClassifier, self).__init__()
        self.embedding = torch.nn.Embedding(num_embeddings=65000, embedding_dim=64)
        self.bilstm = torch.nn.LSTM(input_size=64, hidden_size=8, num_layers=2,
                                    batch_first=True, dropout=0.2, bidirectional=True)
        # as we have 5 classes
        self.linear = nn.Linear(8*2*512, 5)  # last dimension

    def forward(self, x):
        x = self.embedding(x)
        print(x.shape)
        x, _ = self.bilstm(x)
        print(x.shape) …

I deployed llama2-chat-13b from the Model Garden. However, I'm getting errors when trying to run inference.
Configuration:
project="X";
endpoint_id="Y";
location="us-east1";
64 VCPUs, 57.6 GB RAM;
GPU= 4 T4;
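As a cross-check, the higher-level `aiplatform.Endpoint.predict` wrapper is often easier to debug than the raw PredictionService client. A hedged sketch: the payload keys (`prompt`, `max_tokens`, `temperature`) are assumptions that must be verified against the serving container of the llama2-chat-13b deployment, and the SDK call is left uninvoked:

```python
def build_instances(prompt, max_tokens=128, temperature=0.2):
    # hypothetical payload shape for a text-generation endpoint;
    # verify the expected keys against your deployment's serving container
    return [{"prompt": prompt, "max_tokens": max_tokens, "temperature": temperature}]

def predict_via_sdk(project, endpoint_id, location, prompt):
    # not invoked here: requires google-cloud-aiplatform and valid credentials
    from google.cloud import aiplatform
    aiplatform.init(project=project, location=location)
    endpoint = aiplatform.Endpoint(endpoint_name=endpoint_id)
    return endpoint.predict(instances=build_instances(prompt)).predictions
```

If this wrapper fails with the same error as the raw client, the problem is likely in the deployment or payload schema rather than the client code.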
I tried three approaches, but they all return some kind of error:
Approach 1:
from typing import Dict, List, Union

from google.cloud import aiplatform
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value

def predict_custom_trained_model_sample(
    project: str,
    endpoint_id: str,
    instances: Union[Dict, List[Dict]],
    location: str = "us-east1",
    api_endpoint: str = "us-east1-aiplatform.googleapis.com",
):
    """
    `instances` can be either a single instance of type dict or a list
    of instances.
    """
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": …

google-cloud-platform google-cloud-vertex-ai google-generativeai llama