I am using an AWS EC2 instance. I want to use the apt-get command, but it throws the error: "apt-get not found".
How can I use the apt-get command?
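A hedged note, not from the original post: the default Amazon Linux AMIs ship with yum/dnf rather than apt, so apt-get is only present on Debian/Ubuntu-based AMIs. A minimal sketch, assuming the instance runs Amazon Linux and using "htop" only as an illustrative package name:

# Check which distribution the instance is actually running.
cat /etc/os-release
# On Amazon Linux, packages are installed with yum instead of apt-get
# ("htop" is only an example package).
sudo yum update -y
sudo yum install -y htop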
I am using TensorFlow 2.3.0.
I have a Python data generator:
import tensorflow as tf
import numpy as np

vocab = [1, 2, 3, 4, 5]

def create_generator():
    'generates a random number from 0 to len(vocab)-1'
    count = 0
    while count < 4:
        x = np.random.randint(0, len(vocab))
        yield x
        count += 1
I turned it into a tf.data.Dataset object:
gen = tf.data.Dataset.from_generator(create_generator,
                                     args=[],
                                     output_types=tf.int32,
                                     output_shapes=())
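A minimal usage sketch (not part of the original question): in TF 2.x the dataset can be iterated eagerly to inspect what the generator produces:

# Iterate the dataset eagerly to inspect the generated values
# (the generator above yields 4 random integers in [0, len(vocab))).
for x in gen:
    print(x.numpy())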
Now I want to use the map method to subsample items, so that the tf generator never outputs any even numbers.
def subsample(x):
    'remove the item if it is an even number, i.e. in [2, 4]'
    # TODO
    return x

gen = gen.map(subsample)
How can I use …
python-3.x tensorflow tensorflow-datasets tensorflow2.0 tf.data.dataset
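The question above is cut off; a minimal sketch of one way to drop even values, assuming it is acceptable to use tf.data.Dataset.filter instead of map (map can only transform elements, while filter can remove them):

# Keep only odd values; filter drops elements for which the predicate is False.
gen = gen.filter(lambda x: x % 2 != 0)

for x in gen:
    print(x.numpy())   # only odd values are emitted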
I have a simple PySpark dataframe df1:
df1 = spark.createDataFrame([
        ("u1", 1),
        ("u1", 2),
        ("u2", 3),
        ("u3", 4),
    ],
    ['user_id', 'var1'])

print(df1.printSchema())
df1.show(truncate=False)
Output:
root
|-- user_id: string (nullable = true)
|-- var1: long (nullable = true)
None
+-------+----+
|user_id|var1|
+-------+----+
|u1 |1 |
|u1 |2 |
|u2 |3 |
|u3 |4 |
+-------+----+
I have another PySpark dataframe df2:
df2 = spark.createDataFrame([
        (1, 'f1'),
        (2, 'f2'),
    ],
    ['var1', 'var2'])

print(df2.printSchema())
df2.show(truncate=False)
Output:
root
|-- var1: long (nullable = true)
|-- var2: string (nullable = …
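The question body is truncated above, so the intended operation is unknown; as a plain assumption, the sketch below combines df1 and df2 on their shared var1 column with a left join, so rows of df1 without a match are kept:

# Assumption (the original ask is cut off): combine the frames on var1.
# A left join keeps every row of df1 and fills var2 with null where unmatched.
df3 = df1.join(df2, on='var1', how='left')
df3.show(truncate=False)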
I am using TensorFlow 2.1.2 and TensorBoard 2.4.1:
import os, shutil
import tensorflow as tf

print(tf.__version__)

SUMMARY_DIR = 'summary/testing_this'

if 1:
    # SUMMARY_DIR is the path of the directory where the tensorboard SummaryWriter files are written
    # the directory is removed, if it already exists
    if os.path.exists(SUMMARY_DIR):
        shutil.rmtree(SUMMARY_DIR)

train_summary_writer = tf.summary.create_file_writer(os.path.join(SUMMARY_DIR, 'train'))
test_summary_writer = tf.summary.create_file_writer(os.path.join(SUMMARY_DIR, 'test'))

train_summary_counter = 0
for i in range(100):
    with train_summary_writer.as_default():
        tf.summary.scalar('train/sampled-softmax loss', i+5, step=train_summary_counter)
    train_summary_counter += 1
The code above works fine. But when I upgraded to TensorFlow 2.3.0, the following error was thrown:
Serving TensorBoard on localhost; to expose …
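A side note only, not a fix for the truncated error above: the message looks like TensorBoard's normal startup banner rather than the error text itself. The usual way to serve the logs written by the code, assuming the same SUMMARY_DIR, is:

# Serve the event files written under SUMMARY_DIR (path taken from the code above).
tensorboard --logdir summary/testing_this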
When I try to merge a branch into master, I expect a merge conflict to occur. Both branches have a text file containing different text.
# make project directory
mkdir projA
cd projA
# initialize git repo
git init
# make commit in master branch
echo "text 1" > fileA.txt
git add .
git commit -m "commit A"
# make commit in a new branch
git checkout -b branch1
echo "text 2" > fileA.txt
git add .
git commit -m "commit B"
# merge branch into master
git checkout master
git merge branch1
But the merge command just does a fast-forward merge and keeps the text from branch1's txt file, not the text from the master branch.
Can someone explain to me why git …
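The question above is cut off; as a hedged note, the fast-forward happens because master has not moved since branch1 was created, so branch1's history already contains master's and there is nothing to reconcile. A minimal sketch of two alternatives (the extra "commit C" and the --no-ff flag are illustrative, not from the original post):

# Starting again from the state in the question (master at "commit A",
# branch1 at "commit B"):

# Alternative 1: force a merge commit instead of a fast-forward.
# Note: this still does not conflict, because only branch1 changed fileA.txt
# since the common ancestor; it just records an explicit merge commit.
git checkout master
git merge --no-ff branch1

# Alternative 2 (run instead of the above): give master its own conflicting
# commit first, so both sides have changed fileA.txt since they diverged.
git checkout master
echo "text 3" > fileA.txt
git add .
git commit -m "commit C"
git merge branch1      # now git reports a merge conflict in fileA.txt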
tensorflow ×2
amazon-ec2 ×1
apache-spark ×1
apt-get ×1
dataframe ×1
git ×1
git-merge ×1
github ×1
pyspark ×1
python-3.x ×1
tensorboard ×1