I am training a LeNet-300-100 fully-connected neural network on MNIST using Python 3 and PyTorch 1.8 on Google Colab.
The following code is used to apply the transforms and download the MNIST dataset:
import numpy as np
from torchvision import transforms

# MNIST dataset statistics:
# mean = tensor([0.1307]) & std dev = tensor([0.3081])
mean = np.array([0.1307])
std_dev = np.array([0.3081])

transforms_apply = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean = mean, std = std_dev)
])
This produces the error:
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ./data/MNIST/raw/train-images-idx3-ubyte.gz
---------------------------------------------------------------------------
HTTPError                                 Traceback (most recent call last)
 in ()
      2 train_dataset = torchvision.datasets.MNIST(
      3     root = './data', train = True,
----> 4     transform = transforms_apply, download = True
      5 …
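For context, here is a minimal sketch of the dataset-loading call that the truncated traceback points at, reconstructed from the fragment above (the argument names follow torchvision's datasets.MNIST API; the HTTPError is raised during the download step):

import torchvision

train_dataset = torchvision.datasets.MNIST(
    root = './data',               # download target seen in the traceback
    train = True,
    transform = transforms_apply,  # the Compose defined above
    download = True                # triggers the HTTP request that fails with HTTPError
)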
I have installed tensorflow and numpy for Python 3.7.4 [64-bit]. When importing them, I get the following warnings:
/home/user/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/user/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
[... the same FutureWarning is repeated for _np_qint16, _np_quint16, _np_qint32 and np_resource at tensorflow/python/framework/dtypes.py:518-525, and again for the matching definitions in tensorboard/compat/tensorflow_stub/dtypes.py:541-545]
I have the following versions installed: numpy 1.17.0 and tensorflow 1.14.0.
How do I resolve these dtype warnings from tensorflow?
Thanks!
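As a sketch of one common workaround (it only hides the warnings rather than fixing the tensorflow/numpy version pairing), FutureWarning can be filtered before tensorflow is imported:

import warnings
warnings.filterwarnings('ignore', category = FutureWarning)   # silence the numpy deprecation noise

import tensorflow as tf
import numpy as np

The warnings come from tensorflow 1.14 using a dtype constructor form that numpy 1.17 deprecates, so upgrading tensorflow (or pinning numpy below 1.17) should also make them disappear at the source.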
I have a one-dimensional numpy array containing boolean values (True or False). I want to check whether all of its elements are False: the check should return True when every element in the array is False, and False otherwise.
x = np.array([False, False, False]) # this should return True, since all values are False
y = np.array([True, True, True]) # this should return False, since all values are True
z = np.array([True, False, True]) # this should return False, since not all values are False
I have looked into np.all(), but that does not solve my problem.
Thanks!
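For reference, a minimal sketch of one way the all-False check described above can be expressed with numpy (re-using the arrays from the question):

import numpy as np

x = np.array([False, False, False])
y = np.array([True, True, True])
z = np.array([True, False, True])

def all_false(arr):
    # True only when no element is True, i.e. every element is False
    return not np.any(arr)

print(all_false(x), all_false(y), all_false(z))   # True False False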
I was reading an article about Delaunay (scipy) and found this code:
import numpy as np
points = np.array([[0, 0], [0, 1.1], [1, 0], [1, 1]])

from scipy.spatial import Delaunay
tri = Delaunay(points)

import matplotlib.pyplot as plt
plt.triplot(points[:,0], points[:,1], tri.simplices.copy())
plt.plot(points[:,0], points[:,1], 'o')
plt.show()

As I understand it, a simplex is the generalization of a triangle to higher dimensions.
I do not understand what the following code means and would appreciate help understanding it:
# Point indices and coordinates for the two triangles forming the triangulation:

tri.simplices
array([[3, 2, 0],
       [3, 1, 0]], dtype=int32)

points[tri.simplices]
array([[[ 1. ,  1. ],
        [ 1. ,  0. ],
        [ 0. ,  0. ]],

       [[ 1. ,  1. ],
        [ 0. ,  1.1],
        [ 0. ,  0. …
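As a point of reference, here is a small sketch (re-using the points from the snippet) showing what the two expressions evaluate to; the second one is plain NumPy fancy indexing:

import numpy as np
from scipy.spatial import Delaunay

points = np.array([[0, 0], [0, 1.1], [1, 0], [1, 1]])
tri = Delaunay(points)

# tri.simplices has one row per triangle; each row holds the indices (into points)
# of that triangle's three vertices -> shape (n_triangles, 3).
print(tri.simplices.shape)          # (2, 3)

# Indexing points with that integer array replaces every index by its (x, y)
# coordinate, giving each triangle's vertex coordinates -> shape (n_triangles, 3, 2).
print(points[tri.simplices].shape)  # (2, 3, 2)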
I am manually assigning and changing the weights and biases of a neural network using Python 3.8 and PyTorch 1.7. As an example, I have defined a LeNet-300-100 fully-connected neural network to train on the MNIST dataset. The code for the class definition is:

import torch.nn as nn
import torch.nn.functional as F

input_size = 28 * 28   # assumed here: flattened MNIST images give 784 input features

class LeNet300(nn.Module):
    def __init__(self):
        super(LeNet300, self).__init__()

        # Define layers-
        self.fc1 = nn.Linear(in_features = input_size, out_features = 300)
        self.fc2 = nn.Linear(in_features = 300, out_features = 100)
        self.output = nn.Linear(in_features = 100, out_features = 10)

        self.weights_initialization()

    def forward(self, x):
        out = F.relu(self.fc1(x))
        out = F.relu(self.fc2(out))
        return self.output(out)

    def weights_initialization(self):
        '''
        When we define all the modules such as the layers in '__init__()'
        method above, these are all stored in 'self.modules()'.
        We …
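The docstring is cut off above. For what it is worth, here is a rough, hedged sketch (assuming only the class above, not the author's actual method) of the usual pattern of iterating over self.modules() to re-initialize layers, together with manual assignment of weights and biases:

import torch
import torch.nn as nn

model = LeNet300()

# One possible body for a weights_initialization() method: walk model.modules()
# and re-initialize every Linear layer it contains.
for m in model.modules():
    if isinstance(m, nn.Linear):
        nn.init.xavier_normal_(m.weight)
        nn.init.zeros_(m.bias)

# Manually assigning / changing weights and biases of specific layers:
with torch.no_grad():
    model.fc1.weight.fill_(0.01)               # set every weight of fc1 to a constant
    model.output.bias.copy_(torch.zeros(10))   # overwrite the bias of the output layer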
I am trying to train a multi-layer feed-forward neural network for the (Statlog) Shuttle dataset.
It is a multi-class classification task; the target attribute is "Class".
My code is as follows:
# Column names to be used for training and testing sets-
col_names = ['A1', 'A2', 'A3', 'A4', 'A5', 'A6', 'A7', 'A8', 'A9', 'Class']
# Read in training and testing datasets-
training_data = pd.read_csv("shuttle_training.csv", delimiter = ' ', names = col_names)
testing_data = pd.read_csv("shuttle_test.csv", delimiter = ' ', names = col_names)
print("\nTraining data dimension = {0} and testing data dimension = {1}\n".format(training_data.shape, testing_data.shape))
# Training data dimension = (43500, 10) and testing data dimension = (14500, 10) …

I have to train a VGG-19 CNN for CIFAR-10 with learning-rate warmup: the learning rate is warmed up from 0.00001 to 0.1 over the first 10000 iterations (roughly 13 epochs). After that, the remaining training uses a learning rate of 0.01, with learning-rate decay reducing the learning rate by a factor of 10 at epochs 80 and 120. The model is to be trained for 144 epochs in total.
I am using Python 3 and TensorFlow 2. The training dataset has 50000 examples and the batch size is 64, so there are roughly 50000 / 64 = 781 training iterations per epoch. How can I use both learning-rate warmup and learning-rate decay in the code?
Currently, I am using learning-rate decay:
boundaries = [100000, 110000]
values = [1.0, 0.5, 0.1]
learning_rate_fn = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries, values)
print("\nCurrent step value: {0}, LR: {1:.6f}\n".format(optimizer.iterations.numpy(), optimizer.learning_rate(optimizer.iterations)))
However, I don't know how to combine learning-rate warmup with this learning-rate decay.
Any help?
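For illustration only, here is a rough sketch of one way the two could be combined in TF2/Keras: a custom LearningRateSchedule that does linear warmup for the first steps and then hands off to a PiecewiseConstantDecay. The step boundaries below (781 iterations per epoch, so roughly 62480 and 93720 steps for epochs 80 and 120) are assumptions derived from the numbers in the question, not values taken from any tutorial:

import tensorflow as tf
from tensorflow import keras

class WarmupThenDecay(keras.optimizers.schedules.LearningRateSchedule):
    def __init__(self, start_lr, peak_lr, warmup_steps, boundaries, values):
        super().__init__()
        self.start_lr = start_lr
        self.peak_lr = peak_lr
        self.warmup_steps = warmup_steps
        # piecewise-constant decay used once warmup has finished
        self.decay_fn = keras.optimizers.schedules.PiecewiseConstantDecay(boundaries, values)

    def __call__(self, step):
        step = tf.cast(step, tf.float32)
        # linear warmup from start_lr to peak_lr over warmup_steps iterations
        warmup_lr = self.start_lr + (self.peak_lr - self.start_lr) * (step / self.warmup_steps)
        return tf.cond(step < self.warmup_steps,
                       lambda: warmup_lr,
                       lambda: self.decay_fn(step))

# Assumed numbers: warm up 1e-5 -> 0.1 over 10000 steps, then 0.01,
# dropping 10x at ~epoch 80 (62480 steps) and ~epoch 120 (93720 steps).
lr_schedule = WarmupThenDecay(start_lr = 1e-5, peak_lr = 0.1, warmup_steps = 10000,
                              boundaries = [62480, 93720],
                              values = [0.01, 0.001, 0.0001])
optimizer = keras.optimizers.SGD(learning_rate = lr_schedule, momentum = 0.9)

As far as I know, keras.optimizers.schedules does not ship a ready-made warmup schedule in TF 2.x, which is why a small custom subclass like this is one straightforward route.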
I am using PyTorch Lightning version 1.4.0 and have defined the following class for the dataset:
from torch.utils.data import Dataset

class CustomTrainDataset(Dataset):
    '''
    Custom PyTorch Dataset for training

    Args:
        data (pd.DataFrame) - DF containing product info (and maybe also ratings)
        all_itemIds (list) - Python3 list containing all Item IDs
    '''
    def __init__(self, data, all_orderIds):
        self.users, self.items, self.labels = self.get_dataset(data, all_orderIds)

    def __len__(self):
        return len(self.users)

    def __getitem__(self, idx):
        return self.users[idx], self.items[idx], self.labels[idx]

    def get_dataset(self, data, all_orderIds):
        users, items, labels = [], [], []
        user_item_set = set(zip(train_ratings['CustomerID'], train_ratings['ItemCode']))
        num_negatives = 7
        for u, i in user_item_set: …
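The snippet is cut off above. For completeness, a small hedged sketch of how such a Dataset is typically consumed; the DataFrame name train_df and the batch size are assumptions for illustration, not values from the question:

from torch.utils.data import DataLoader

# Hypothetical usage - train_df / all_orderIds stand in for the question's data.
train_dataset = CustomTrainDataset(train_df, all_orderIds)
train_loader = DataLoader(train_dataset, batch_size = 512, shuffle = True, num_workers = 2)

for users, items, labels in train_loader:
    # each batch collates the (user, item, label) triples returned by __getitem__
    pass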
I have written a simple C++ program, as follows:

#include<iostream>
using namespace std;
class Rectangle
{
double length, breadth;
public:
Rectangle(void); // constructor overloading
Rectangle(double, double); // constructor of class
// void set_values(double l, double b);
double area(void);
}; // can provide an object name here
// default constructor of class 'Rectangle'-
Rectangle::Rectangle(void)
{
length = 5;
breadth = 5;
}
// constructor of class 'Rectangle'-
Rectangle::Rectangle(double l, double b)
{
length = l;
breadth = b;
}
/*
void Rectangle::set_values(double l, double b)
{
length …

I am trying to follow the Keras tutorial TensorFlow 2.0 Magnitude-based weight pruning and came across the parameter initial_sparsity:
import tensorflow_model_optimization as tfmot
from tensorflow_model_optimization.sparsity import keras as sparsity
import numpy as np
epochs = 12
num_train_samples = x_train.shape[0]
end_step = np.ceil(1.0 * num_train_samples / batch_size).astype(np.int32) * epochs
print('End step: ' + str(end_step))
pruning_params = {
    'pruning_schedule': sparsity.PolynomialDecay(initial_sparsity=0.50,
                                                 final_sparsity=0.90,
                                                 begin_step=2000,
                                                 end_step=end_step,
                                                 frequency=100)
}
The tutorial says:
The parameters used here mean:
Sparsity PolynomialDecay is used throughout the training process. We start at 50% sparsity and gradually train the model to reach 90% sparsity. X% sparsity means that X% of the weight tensor is going to be pruned away.
My question is: shouldn't you start with an initial_sparsity of 0% and then prune away 90% of the weights?
What does starting with an initial_sparsity of 50% mean? Does this mean that 50% of the weights are pruned first, and then 90% …