What is feature hashing (the hashing trick)?

Mag*_*gie 16 python hash vector machine-learning

I know that feature hashing (the hashing trick) is used to reduce dimensionality and handle the sparsity of bit vectors, but I don't understand how it actually works. Can anyone explain this to me? Are there any Python libraries that do feature hashing?

Thanks.

BBD*_*Sys 7

With Pandas, you can use something like the following:

import pandas as pd
import numpy as np

data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
        'year': [2000, 2001, 2002, 2001, 2002],
        'pop': [1.5, 1.7, 3.6, 2.4, 2.9]}

data = pd.DataFrame(data)

def hash_col(df, col, N):
    # Hash each value of `col` into one of N buckets and one-hot encode the bucket.
    # Note: Python 3 salts the built-in hash() of strings per process, so the
    # bucket a value lands in can differ between runs unless PYTHONHASHSEED is fixed.
    cols = [col + "_" + str(i) for i in range(N)]
    def xform(x):
        tmp = [0 for _ in range(N)]
        tmp[hash(x) % N] = 1
        return pd.Series(tmp, index=cols)
    df[cols] = df[col].apply(xform)
    return df.drop(col, axis=1)

print(hash_col(data, 'state', 4))

The output will be:

   pop  year  state_0  state_1  state_2  state_3
0  1.5  2000        0        1        0        0
1  1.7  2001        0        1        0        0
2  3.6  2002        0        1        0        0
3  2.4  2001        0        0        0        1
4  2.9  2002        0        0        0        1

Similarly, at the Series level you can do:

import numpy as np
import pandas as pd

def hash_col(df, col, N):
    # Same idea, but `df` here is a single row represented as a pd.Series.
    df = df.replace('', np.nan)
    cols = [col + "_" + str(i) for i in range(N)]
    tmp = [0 for _ in range(N)]
    tmp[hash(df[col]) % N] = 1
    res = pd.concat([df, pd.Series(tmp, index=cols)])
    return res.drop(col)

a = pd.Series(['new york', 30, ''], index=['city', 'age', 'test'])
b = pd.Series(['boston', 30, ''], index=['city', 'age', 'test'])

print(hash_col(a, 'city', 10))
print(hash_col(b, 'city', 10))

This works one Series at a time, with the column names taken from the Pandas index. It also replaces empty strings with NaN and floats everything.

age        30
test      NaN
city_0      0
city_1      0
city_2      0
city_3      0
city_4      0
city_5      0
city_6      0
city_7      1
city_8      0
city_9      0
dtype: object
age        30
test      NaN
city_0      0
city_1      0
city_2      0
city_3      0
city_4      0
city_5      1
city_6      0
city_7      0
city_8      0
city_9      0
dtype: object

However, if you have a vocabulary and only want plain one-hot encoding, you can use:

import numpy as np
import pandas as pd
import scipy.sparse as sps

def hash_col(df, col, vocab):
    # Plain one-hot encoding against a known vocabulary (no hashing involved).
    cols = [col + "=" + str(v) for v in vocab]
    def xform(x):
        tmp = [0 for _ in range(len(vocab))]
        tmp[vocab.index(x)] = 1
        return pd.Series(tmp, index=cols)
    df[cols] = df[col].apply(xform)
    return df.drop(col, axis=1)

data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
        'year': [2000, 2001, 2002, 2001, 2002],
        'pop': [1.5, 1.7, 3.6, 2.4, 2.9]}

df = pd.DataFrame(data)

df2 = hash_col(df, 'state', ['Ohio', 'Nevada'])
print(df2)

# Sparsify the final frame:
sparse = sps.csr_matrix(df2.values)

which gives:

   pop  year  state=Ohio  state=Nevada
0  1.5  2000           1             0
1  1.7  2001           1             0
2  3.6  2002           1             0
3  2.4  2001           0             1
4  2.9  2002           0             1

I also added sparsification of the final dataframe. In an incremental setting, where we may not have encountered every value up front (but somehow obtained the list of all possible values), the approach above can be used. Incremental ML methods need the same number of features at each increment, so the one-hot encoding has to produce the same number of features for every batch.
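As a minimal sketch of that incremental point (using scikit-learn's FeatureHasher, which is not part of the code above): because the hasher is stateless and never fits a vocabulary, batches containing values that were never seen before still come out with the same fixed number of columns.

from sklearn.feature_extraction import FeatureHasher

# Stateless hasher: no vocabulary is fit, so every batch maps to n_features columns.
h = FeatureHasher(n_features=8, input_type='string')

batch1 = [['Ohio'], ['Nevada']]
batch2 = [['Texas'], ['Utah']]      # values never seen in batch1

X1 = h.transform(batch1)
X2 = h.transform(batch2)

print(X1.shape, X2.shape)           # both (2, 8): same width for every batch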


Yev*_*eny 4

See here (sorry, for some reason I could not add this as a comment). Also, the first page of Feature Hashing for Large Scale Multitask Learning explains it very well.
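For a concrete picture of what that paper describes, here is a minimal sketch of the signed hashing trick (my own illustration, not code from the paper): each feature name is hashed to one of n_features buckets, a second hash bit picks a +/-1 sign so that colliding features tend to cancel rather than bias the sum, and the values are accumulated. scikit-learn exposes the same idea as sklearn.feature_extraction.FeatureHasher and HashingVectorizer, which also answers the library part of the question.

import hashlib

def signed_hash_vector(features, n_features=16):
    # Signed hashing trick: hash each feature name to a bucket, derive a
    # +/-1 sign from another part of the hash, and accumulate the values.
    vec = [0.0] * n_features
    for name, value in features.items():
        h = int(hashlib.md5(name.encode('utf-8')).hexdigest(), 16)
        index = h % n_features                        # bucket for this feature
        sign = 1.0 if (h >> 64) % 2 == 0 else -1.0    # sign bit from higher bits
        vec[index] += sign * value
    return vec

print(signed_hash_vector({'city=new york': 1, 'age': 30}))
print(signed_hash_vector({'city=boston': 1, 'age': 30}))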