I am building a network with Keras. As part of it, I need a layer that takes the LSTM input, does nothing, and outputs exactly what it receives. That is, if each input record to the LSTM looks like [[A_t1, A_t2, A_t3, A_t4, A_t5, A_t6]], I am looking for a layer:
model.add(SomeIdentityLayer(x))
where SomeIdentityLayer(x) takes [[A_t1, A_t2, A_t3, A_t4, A_t5, A_t6]] as input and outputs [[A_t1, A_t2, A_t3, A_t4, A_t5, A_t6]]. Does Keras have such a layer/construct? Thanks!
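A minimal sketch of one way to get this pass-through behaviour, assuming a Sequential model and illustrative input dimensions (not taken from the question): a Lambda layer that simply returns its input; Activation('linear') behaves the same way.

from keras.models import Sequential
from keras.layers import Lambda, LSTM

timestep, n_feature = 6, 1  # illustrative shapes, not from the question

model = Sequential()
# Identity layer: outputs exactly what it receives
model.add(Lambda(lambda x: x, input_shape=(timestep, n_feature)))
model.add(LSTM(8))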
I have the following dataframe df:
time col_A
0 1520582580.000 79.000
1 1520582880.000 22.500
2 1520583180.000 29.361
3 1520583480.000 116.095
4 1520583780.000 19.972
5 1520584080.000 36.857
6 1520584380.000 15.167
7 1520584680.000 nan
8 1520584980.000 nan
9 1520585280.000 nan
10 1520585580.000 34.500
11 1520585880.000 17.583
12 1520586180.000 nan
13 1520586480.000 48.833
14 1520586780.000 18.806
15 1520587080.000 18.583
col_A has some missing data. I want to create a col_B that takes the previous value for each missing record, i.e.
6 1520584380.000 15.167
7 1520584680.000 15.167
8 1520584980.000 15.167
9 1520585280.000 15.167
10 1520585580.000 34.500
11 1520585880.000 17.583
12 1520586180.000 17.583
13 1520586480.000 …
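A minimal sketch of one way to do this with pandas, assuming df is the frame shown above: ffill() propagates the last valid value forward over the missing rows.

import pandas as pd
import numpy as np

# Small illustrative frame with the same kind of gaps as above
df = pd.DataFrame({
    "time": [1520584380.0, 1520584680.0, 1520584980.0, 1520585280.0, 1520585580.0],
    "col_A": [15.167, np.nan, np.nan, np.nan, 34.500],
})

# col_B takes the previous non-missing value for each missing record
df["col_B"] = df["col_A"].ffill()
print(df)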
I have the following Keras LSTM model that uses the functional API:
from keras.models import Sequential, Model
from keras.layers import Lambda, LSTM, Dense

model = Sequential()
model.add(Lambda(lambda x: x, input_shape=(timestep, n_feature)))
output = model.output
output = LSTM(8)(output)
output = Dense(2)(output)
inputTensor = model.input
myModel = Model([inputTensor], output)
myModel.compile(loss='mean_squared_error', optimizer='adam')
myModel.fit([trainX], trainY, epochs=100, batch_size=1, verbose=2, validation_split = 0.1)
The model works fine, but I think there is redundant syntax in my architecture. For example, the Lambda layer is only there to define the input_shape; perhaps it can be removed? Can the code above be simplified/cleaned up (I'd like to keep using the functional API)? Thanks!
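A minimal sketch of one possible cleanup, assuming timestep and n_feature are defined as in the code above: the functional API can start from an Input layer, which makes the pass-through Lambda unnecessary.

from keras.models import Model
from keras.layers import Input, LSTM, Dense

inputTensor = Input(shape=(timestep, n_feature))  # replaces the Lambda pass-through
x = LSTM(8)(inputTensor)
output = Dense(2)(x)

myModel = Model(inputs=inputTensor, outputs=output)
myModel.compile(loss='mean_squared_error', optimizer='adam')
myModel.fit([trainX], trainY, epochs=100, batch_size=1, verbose=2, validation_split=0.1)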
I am training a CNN model with PyTorch. Here is my network architecture:
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.init as I

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv1_bn = nn.BatchNorm2d(32)
        self.conv2 = nn.Conv2d(32, 64, 5)
        self.conv2_drop = nn.Dropout2d()
        self.conv2_bn = nn.BatchNorm2d(64)
        self.fc1 = torch.nn.Linear(53*53*64, 256)
        self.fc2 = nn.Linear(256, 136)

    def forward(self, x):
        x = F.relu(self.conv1_bn(self.pool(self.conv1(x))))
        x = F.relu(self.conv2_bn(self.pool(self.conv2_drop(self.conv2(x)))))
        x = x.view(-1, 53*53*64)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return x …
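The rest of this question is cut off above, but one detail worth sanity-checking in this architecture is the 53*53*64 input size of fc1. A minimal sketch that traces the spatial size through the conv/pool stack, assuming a 224x224 grayscale input (the resolution that 53x53 works out from):

import torch
import torch.nn as nn

# Trace the spatial size for a single 224x224 grayscale image
x = torch.randn(1, 1, 224, 224)
conv1 = nn.Conv2d(1, 32, 5)
conv2 = nn.Conv2d(32, 64, 5)
pool = nn.MaxPool2d(2, 2)

x = pool(conv1(x))  # 224 -> 220 (conv, kernel 5) -> 110 (pool)
x = pool(conv2(x))  # 110 -> 106 (conv, kernel 5) -> 53 (pool)
print(x.shape)      # torch.Size([1, 64, 53, 53]) -> flattened: 53*53*64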
I am building a DecoderRNN with PyTorch (it is an image-captioning decoder):
class DecoderRNN(nn.Module):
    def __init__(self, embed_size, hidden_size, vocab_size):
        super(DecoderRNN, self).__init__()
        self.hidden_size = hidden_size
        self.gru = nn.GRU(embed_size, hidden_size, hidden_size)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, features, captions):
        print(features.shape)
        print(captions.shape)
        output, hidden = self.gru(features, captions)
        output = self.softmax(self.out(output[0]))
        return output, hidden
The data has the following shapes:
torch.Size([10, 200]) <- features.shape (10 for batch size)
torch.Size([10, 12]) <- captions.shape (10 for batch size)
Then I get the following error. Any idea what I am missing here? Thanks!
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-2-76e05ba08b1d> in <module>()
44 # Pass the inputs through the CNN-RNN model. …
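The end of the traceback is cut off above, so the exact RuntimeError is unknown, but for reference, here is a minimal sketch of how nn.GRU expects to be called (the sizes are illustrative; note that the third positional constructor argument is num_layers, not hidden_size again, and the second argument of the forward call is the initial hidden state, not a second input sequence):

import torch
import torch.nn as nn

# nn.GRU(input_size, hidden_size, num_layers): the third argument is the
# number of stacked layers, not the hidden size repeated
gru = nn.GRU(input_size=200, hidden_size=512, num_layers=1, batch_first=True)

features = torch.randn(10, 1, 200)  # (batch, seq_len, input_size) with batch_first=True
h0 = torch.zeros(1, 10, 512)        # (num_layers, batch, hidden_size)

output, hidden = gru(features, h0)
print(output.shape)  # torch.Size([10, 1, 512])
print(hidden.shape)  # torch.Size([1, 10, 512])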
I am debugging my Python code with ipdb like this:
python -m ipdb my_test.py -d my_input_config -o my_output
and get the following warning:
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/runpy.py:125: RuntimeWarning: 'ipdb.__main__' found in sys.modules after import of package 'ipdb', but prior to execution of 'ipdb.__main__'; this may result in unpredictable behaviour
warn(RuntimeWarning(msg))
What does this mean, and how can I fix it? Thanks!
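As the message itself says, runpy warns because `python -m ipdb` imports the ipdb package and then finds ipdb.__main__ already in sys.modules before running it; in practice this is usually harmless. A minimal sketch of one common way to avoid it, assuming a hypothetical my_test.py, is to set a breakpoint inside the script and run it with plain python:

# my_test.py -- hypothetical minimal script for illustration
import ipdb

def main():
    values = [1, 2, 3]
    ipdb.set_trace()  # drops into the ipdb prompt here
    print(sum(values))

if __name__ == "__main__":
    main()

Running `python my_test.py` then starts the debugger at the breakpoint without going through runpy.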
I am trying to map an RDD to a pair RDD in Scala so that I can use reduceByKey later. Here is what I did:
userRecords is an org.apache.spark.rdd.RDD[UserElement]. I tried to create a pair RDD from userRecords as follows:
val userPairs: PairRDDFunctions[String, UserElement] = userRecords.map { t =>
val nameKey: String = t.getName()
(nameKey, t)
}
However, I got the error:

type mismatch; found: org.apache.spark.rdd.RDD[(String, com.mypackage.UserElement)] required: org.apache.spark.rdd.PairRDDFunctions[String, com.mypackage.UserElement]
What am I missing here? Many thanks!
I have the following code, and I would like my function to have a generic return type:
object myUtility {
def myFunction(input1:String, input2:String, returnType: T): T = {
:
:
}
What is the correct syntax, and what do I need to import to achieve this? Thank you very much!
I am running a Spark job and I keep getting warnings that there is not enough space to cache rdd_128_17000 in memory. However, the attached screenshot clearly says only 90.8 G out of 719.3 G is used. Why is that? Thanks!
15/10/16 02:19:41 WARN storage.MemoryStore: Not enough space to cache rdd_128_17000 in memory! (computed 21.4 GB so far)
15/10/16 02:19:41 INFO storage.MemoryStore: Memory use = 4.1 GB (blocks) + 21.2 GB (scratch space shared across 1 thread(s)) = 25.2 GB. Storage limit = 36.0 GB.
15/10/16 02:19:44 WARN storage.MemoryStore: Not enough space to cache rdd_129_17000 in memory! (computed 9.4 GB so far)
15/10/16 02:19:44 INFO storage.MemoryStore: Memory use = 4.1 GB (blocks) + 30.6 …
Is it possible to retrieve the schema of an RDD and store it in a variable? I want to use the same schema to create a new dataframe from another RDD. For example, here is what I would like to have:
val schema = oldDF.getSchema()
val newDF = sqlContext.createDataFrame(rowRDD, schema)
Assuming I already have rowRDD in the format RDD[org.apache.spark.sql.Row], is this possible?