wav*_*per's posts

tfa.layers.ESN example

Does anyone have a working example of implementing an echo state network with tfa.layers.ESN?

Currently, I have the following model:

Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
_________________________________________________________________
digits (InputLayer)          [(None, 41, 4)]           0         
_________________________________________________________________
ESN                          (None, 41, 242)           30492     
_________________________________________________________________
flatten (Flatten)            (None, 9922)              0         
_________________________________________________________________
predictions (Dense)          (None, 2)                 19846  
_________________________________________________________________   
Total params: 50,338
Trainable params: 19,846
Non-trainable params: 30,492
_________________________________________________________________

The problem I am running into is that once I increase the number of hidden units, the model's accuracy decreases rather than increases, on both the training and test datasets. On top of that, the accuracy I get is very low.
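Not the author's tfa code, but a minimal NumPy sketch of the echo-state idea may help frame the accuracy question: the reservoir weights are fixed (non-trainable) and are typically rescaled so their spectral radius stays below 1, the "echo state" condition. If I recall the API correctly, tfa.layers.ESN exposes this as a spectral_radius argument (default 0.9); an unstable reservoir, or the growing Flatten+Dense head overfitting, are plausible reasons accuracy drops as units increase. All sizes below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 4, 50, 41  # illustrative sizes, not the author's

# Fixed (non-trainable) random reservoir, rescaled so its spectral
# radius is 0.9 -- the "echo state" stability condition.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def reservoir_states(x):
    """x: (T, n_in) input sequence -> (T, n_res) reservoir states."""
    h = np.zeros(n_res)
    states = []
    for t in range(T):
        h = np.tanh(W_in @ x[t] + W @ h)
        states.append(h)
    return np.stack(states)

states = reservoir_states(rng.normal(size=(T, n_in)))
print(states.shape)  # (41, 50)
```

Only a readout layer on top of these states is trained, which matches the Param # column above: the ESN's weights are all non-trainable, and only the final Dense layer's 19,846 parameters learn.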

python tensorflow

8 votes
0 answers
625 views

How to convert a HuggingFace Seq2seq model to ONNX format

I am trying to convert the Pegasus Newsroom model from the HuggingFace transformers library to ONNX format. I followed the guide published by HuggingFace; after installing the prerequisites, I ran the following code:

!rm -rf onnx/
from pathlib import Path
from transformers.convert_graph_to_onnx import convert

convert(framework="pt", model="google/pegasus-newsroom", output=Path("onnx/google/pegasus-newsroom.onnx"), opset=11)


and got this error:

ValueError                                Traceback (most recent call last)
<ipython-input-9-3b37ed1ceda5> in <module>()
      3 from transformers.convert_graph_to_onnx import convert
      4 
----> 5 convert(framework="pt", model="google/pegasus-newsroom", output=Path("onnx/google/pegasus-newsroom.onnx"), opset=11)
      6 
      7 

6 frames
/usr/local/lib/python3.6/dist-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, head_mask, encoder_head_mask, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
    938             input_shape = inputs_embeds.size()[:-1]
    939         else:
--> 940             raise ValueError("You have to specify either …

python tensorflow pytorch onnx huggingface-transformers

7 votes
1 answer
5748 views

ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds

While trying to convert a T5 question-generation model to a TorchScript model, I ran into this error:

ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds

Here is the code I ran on Colab:

!pip install -U transformers==3.0.0
!python -m nltk.downloader punkt

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

model = AutoModelForSeq2SeqLM.from_pretrained('valhalla/t5-base-qg-hl')

t_input =  'Python is a programming language. It is developed by <hl> Guido Van Rossum <hl>. </s>'

tokenizer = AutoTokenizer.from_pretrained('valhalla/t5-base-qg-hl', return_tensors = 'pt')

def _tokenize(
    inputs,
    padding=True,
    truncation=True,
    add_special_tokens=True,
    max_length=64
):
    inputs = tokenizer.batch_encode_plus(
        inputs, 
        max_length=max_length,
        add_special_tokens=add_special_tokens,
        truncation=truncation,
        padding="max_length" if padding else False,
        pad_to_max_length=padding,
        return_tensors="pt"
    )
    return inputs

token = …
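T5's forward() needs decoder inputs as well as encoder inputs, so the trace must be given a decoder_input_ids tensor (or labels, from which transformers derives it by shifting right). A minimal sketch of that shift-right construction, with made-up token ids; T5's decoder_start_token_id is its pad token id, 0:

```python
PAD = 0  # T5 uses the pad token id as decoder_start_token_id

def shift_right(labels, start_token_id=PAD):
    """Build decoder_input_ids: prepend the start token, drop the last id."""
    return [start_token_id] + labels[:-1]

labels = [37, 19, 48, 1]            # hypothetical target ids ending in </s> (1)
decoder_input_ids = shift_right(labels)
print(decoder_input_ids)  # [0, 37, 19, 48]
```

With a tensor built this way, calling model(input_ids=..., attention_mask=..., decoder_input_ids=...) during tracing avoids the ValueError, since the decoder no longer receives empty inputs.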

python deep-learning torchscript huggingface-transformers

5 votes
1 answer
10k views