python machine-learning lstm keras tensorflow
I want to build an LSTM model with embeddings for my categorical features. I currently have numerical features and several categorical features, such as location, which I cannot one-hot encode with pd.get_dummies() due to computational complexity, even though that is what I originally planned to do.
Let's look at an example:
import pandas as pd

data = {
'user_id': [1,1,1,1,2,2,3],
'time_on_page': [10,20,30,20,15,10,40],
'location': ['London','New York', 'London', 'New York', 'Hong Kong', 'Tokyo', 'Madrid'],
'page_id': [5,4,2,1,6,8,2]
}
d = pd.DataFrame(data=data)
print(d)
   user_id  time_on_page   location  page_id
0        1            10     London        5
1        1            20   New York        4
2        1            30     London        2
3        1            20   New York        1
4        2            15  Hong Kong        6
5        2            10      Tokyo        8
6        3            40     Madrid        2
Consider people visiting a website. I am tracking numerical data such as time on page. Categorical data includes: location (over 1000 unique values), page_id (over 1000 unique values), and author_id (over 100 unique values). The simplest solution would be to one-hot encode everything and feed it into an LSTM with variable sequence lengths, where each timestep corresponds to a different page view.
The DataFrame above would generate 7 training samples with variable sequence lengths. For example, for user_id=2 I would have 2 training samples:
[ ROW_INDEX_4 ] and [ ROW_INDEX_4, ROW_INDEX_5 ]
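For illustration, here is a minimal sketch of how such variable-length prefix samples could be built from the DataFrame above (the grouping logic is my assumption about the intended sampling):

# A minimal sketch (assumption): one prefix sample per row, grouped by user,
# so the example DataFrame yields 4 + 2 + 1 = 7 samples.
samples = []
for _, group in d.groupby('user_id', sort=False):
    rows = group.drop('user_id', axis=1).values
    for end in range(1, len(rows) + 1):
        samples.append(rows[:end])   # prefixes of length 1, 2, ..., len(rows)

print(len(samples))  # 7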
Let X be the training data, and let's look at the first training sample, X[0].
As shown above, my categorical features are X[0][:, n:].
Before creating the sequences, I factorize the categorical variables into [0, 1, ..., number_of_cats - 1] using pd.factorize(), so the data in X[0][:, n:] are integers corresponding to their indices.
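For example, factorizing the location column would look roughly like this (a small illustration, not part of my original code):

codes, uniques = pd.factorize(d['location'])
print(codes)    # [0 1 0 1 2 3 4] -- one integer index per category
print(uniques)  # Index(['London', 'New York', 'Hong Kong', 'Tokyo', 'Madrid'], dtype='object')
d['location'] = codes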
Do I need to create a separate Embedding for each categorical feature? E.g. one embedding for each of x_n, x_{n+1}, ..., x_m?
If so, how do I put that into Keras code?
model = Sequential()
model.add(Embedding(?, ?, input_length=variable)) # How do I feed the data into this embedding? Only the categorical inputs.
model.add(LSTM())
model.add(Dense())
model.add(Activation('sigmoid'))
model.compile()
model.fit_generator() # fits the variable-length sequences X[i] one by one
My proposed solution looks like this:
I could train a Word2Vec model on each individual categorical feature (columns n to m) to vectorize any given value. For example, London would be vectorized in 3 dimensions, assuming I use 3-dimensional embeddings. Then I would put everything back into the X matrix, which would now have n + 3(m - n) columns, and train the LSTM model on that?
I just think there should be a simpler/smarter way.
One solution, as you mentioned, is to one-hot encode the categorical data (or even use them as-is, in index-based format) and feed them along with the numerical data to an LSTM layer. Of course, you could also use two LSTM layers here, one for processing the numerical data and another for processing the categorical data (in one-hot encoded or index-based format), and then merge their outputs.
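For example, a minimal sketch of that two-branch variant could look like this (the layer sizes and input shapes below are just assumptions for illustration):

from keras.layers import Input, LSTM, Dense, concatenate
from keras.models import Model

n_steps = 100            # assumed number of timesteps
n_numerical_feats = 10   # assumed number of numerical features
n_cat_feats = 3          # categorical features, e.g. as integer indices per timestep

numeric_in = Input(shape=(n_steps, n_numerical_feats), name='numeric_input')
cat_in = Input(shape=(n_steps, n_cat_feats), name='cat_input')

# one LSTM branch per kind of data, merged afterwards
num_branch = LSTM(32)(numeric_in)
cat_branch = LSTM(32)(cat_in)
merged = concatenate([num_branch, cat_branch])
out = Dense(1, activation='sigmoid')(merged)

model = Model([numeric_in, cat_in], out)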
Another solution is to have one separate embedding layer for each of the categorical features. Each embedding layer may have its own embedding dimension (and, as noted above, you may also have more than one LSTM layer to process the numerical and categorical features separately):
from keras.layers import Input, Embedding, TimeDistributed, Reshape, LSTM, concatenate
from keras.models import Model

num_cats = 3                  # number of categorical features
n_steps = 100                 # number of timesteps in each sample
n_numerical_feats = 10        # number of numerical features in each sample
cat_size = [1000, 500, 100]   # number of categories in each categorical feature
cat_embd_dim = [50, 10, 100]  # embedding dimension for each categorical feature

numerical_input = Input(shape=(n_steps, n_numerical_feats), name='numeric_input')

cat_inputs = []
for i in range(num_cats):
    cat_inputs.append(Input(shape=(n_steps, 1), name='cat' + str(i+1) + '_input'))

cat_embedded = []
for i in range(num_cats):
    embed = TimeDistributed(Embedding(cat_size[i], cat_embd_dim[i]))(cat_inputs[i])
    cat_embedded.append(embed)

cat_merged = concatenate(cat_embedded)
cat_merged = Reshape((n_steps, -1))(cat_merged)
merged = concatenate([numerical_input, cat_merged])
lstm_out = LSTM(64)(merged)

model = Model([numerical_input] + cat_inputs, lstm_out)
model.summary()
Here is the model summary:
Layer (type) Output Shape Param # Connected to
==================================================================================================
cat1_input (InputLayer) (None, 100, 1) 0
__________________________________________________________________________________________________
cat2_input (InputLayer) (None, 100, 1) 0
__________________________________________________________________________________________________
cat3_input (InputLayer) (None, 100, 1) 0
__________________________________________________________________________________________________
time_distributed_1 (TimeDistrib (None, 100, 1, 50) 50000 cat1_input[0][0]
__________________________________________________________________________________________________
time_distributed_2 (TimeDistrib (None, 100, 1, 10) 5000 cat2_input[0][0]
__________________________________________________________________________________________________
time_distributed_3 (TimeDistrib (None, 100, 1, 100) 10000 cat3_input[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 100, 1, 160) 0 time_distributed_1[0][0]
time_distributed_2[0][0]
time_distributed_3[0][0]
__________________________________________________________________________________________________
numeric_input (InputLayer) (None, 100, 10) 0
__________________________________________________________________________________________________
reshape_1 (Reshape) (None, 100, 160) 0 concatenate_1[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate) (None, 100, 170) 0 numeric_input[0][0]
reshape_1[0][0]
__________________________________________________________________________________________________
lstm_1 (LSTM) (None, 64) 60160 concatenate_2[0][0]
==================================================================================================
Total params: 125,160
Trainable params: 125,160
Non-trainable params: 0
__________________________________________________________________________________________________
However, there is another solution you could try: use just a single embedding layer for all the categorical features. It involves some preprocessing, though: you need to re-index all the categories so that they are distinct from each other. For example, the categories in the first categorical feature would be numbered from 1 to size_first_cat, the categories in the second categorical feature would be numbered from size_first_cat + 1 to size_first_cat + size_second_cat, and so on. However, with this solution all the categorical features would have the same embedding dimension, since we are using only one embedding layer.
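As a rough sketch of that re-indexing idea (the variable names are mine, I start the indices at 0 rather than 1, and num_cats, n_steps, n_numerical_feats and cat_size are reused from the snippet above):

import numpy as np
from keras.layers import Input, Embedding, Reshape, LSTM, concatenate
from keras.models import Model

offsets = np.cumsum([0] + cat_size[:-1])   # e.g. [0, 1000, 1500]
total_cats = sum(cat_size)                 # 1600 distinct indices overall
shared_embd_dim = 50                       # one dimension shared by all features

# Preprocessing (assumed): X_cat has shape (samples, n_steps, num_cats) with
# per-feature indices starting at 0; shifting by the offsets keeps them distinct:
# X_cat_shifted = X_cat + offsets

numerical_input = Input(shape=(n_steps, n_numerical_feats), name='numeric_input')
cat_input = Input(shape=(n_steps, num_cats), name='cat_input')

cat_embedded = Embedding(total_cats, shared_embd_dim)(cat_input)             # (batch, n_steps, num_cats, dim)
cat_embedded = Reshape((n_steps, num_cats * shared_embd_dim))(cat_embedded)  # flatten the last two axes

merged = concatenate([numerical_input, cat_embedded])
lstm_out = LSTM(64)(merged)
model = Model([numerical_input, cat_input], lstm_out)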
Update: Now that I think about it, you could also reshape the categorical features, either in the data preprocessing stage or in the model itself, to get rid of the TimeDistributed and Reshape layers (and this may also increase training speed):
numerical_input = Input(shape=(n_steps, n_numerical_feats), name='numeric_input')

cat_inputs = []
for i in range(num_cats):
    cat_inputs.append(Input(shape=(n_steps,), name='cat' + str(i+1) + '_input'))

cat_embedded = []
for i in range(num_cats):
    embed = Embedding(cat_size[i], cat_embd_dim[i])(cat_inputs[i])
    cat_embedded.append(embed)

cat_merged = concatenate(cat_embedded)
merged = concatenate([numerical_input, cat_merged])
lstm_out = LSTM(64)(merged)

model = Model([numerical_input] + cat_inputs, lstm_out)
As for fitting the model, you need to feed each input layer its own corresponding numpy array separately, for example:
X_tr_numerical = X_train[:,:,:n_numerical_feats]
# extract categorical features: you could use a for loop to do this as well.
# note that we reshape categorical features to make them consistent with the updated solution
X_tr_cat1 = X_train[:,:,cat1_idx].reshape(-1, n_steps)
X_tr_cat2 = X_train[:,:,cat2_idx].reshape(-1, n_steps)
X_tr_cat3 = X_train[:,:,cat3_idx].reshape(-1, n_steps)
# don't forget to compile the model ...
# fit the model
model.fit([X_tr_numerical, X_tr_cat1, X_tr_cat2, X_tr_cat3], y_train, ...)
# or you can use input layer names instead
model.fit({'numeric_input': X_tr_numerical,
           'cat1_input': X_tr_cat1,
           'cat2_input': X_tr_cat2,
           'cat3_input': X_tr_cat3}, y_train, ...)
And if you want to use fit_generator(), there is no difference:
# if you are using a generator
def my_generator(...):
    # prep the data ...
    yield [batch_tr_numerical, batch_tr_cat1, batch_tr_cat2, batch_tr_cat3], batch_tr_y
    # or use the names
    yield {'numeric_input': batch_tr_numerical,
           'cat1_input': batch_tr_cat1,
           'cat2_input': batch_tr_cat2,
           'cat3_input': batch_tr_cat3}, batch_tr_y

model.fit_generator(my_generator(...), ...)

# or if you are subclassing the Sequence class
class MySequence(Sequence):
    def __init__(self, x_set, y_set, batch_size):
        # initialize the data
    def __getitem__(self, idx):
        # fetch data for the given batch index (i.e. idx)
        # same as the generator above but use `return` instead of `yield`

model.fit_generator(MySequence(...), ...)
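As a slightly more complete illustration of the Sequence route (a sketch under the assumption that the categorical inputs have already been split into separate arrays as above):

import numpy as np
from keras.utils import Sequence

class MySequence(Sequence):
    def __init__(self, x_numerical, x_cat1, x_cat2, x_cat3, y_set, batch_size):
        self.x_numerical = x_numerical
        self.x_cats = [x_cat1, x_cat2, x_cat3]
        self.y = y_set
        self.batch_size = batch_size

    def __len__(self):
        # number of batches per epoch
        return int(np.ceil(len(self.y) / float(self.batch_size)))

    def __getitem__(self, idx):
        s = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        inputs = {'numeric_input': self.x_numerical[s],
                  'cat1_input': self.x_cats[0][s],
                  'cat2_input': self.x_cats[1][s],
                  'cat3_input': self.x_cats[2][s]}
        return inputs, self.y[s]

seq = MySequence(X_tr_numerical, X_tr_cat1, X_tr_cat2, X_tr_cat3, y_train, batch_size=32)
model.fit_generator(seq, ...)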