alv*_*vas 219 python numpy machine-learning logistic-regression softmax
From Udacity's deep learning class, the softmax of y_i is simply the exponential divided by the sum of exponentials of the whole Y vector:

S(y_i) = e^(y_i) / sum_j e^(y_j)

where S(y_i) is the softmax function of y_i, e is the exponential, and j is the number of columns in the input vector Y.
I've tried the following:
import numpy as np
def softmax(x):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()
scores = [3.0, 1.0, 0.2]
print(softmax(scores))
which returns:
[ 0.8360188 0.11314284 0.05083836]
But the suggested solution was:
def softmax(x):
    """Compute softmax values for each sets of scores in x."""
    return np.exp(x) / np.sum(np.exp(x), axis=0)
which produces the same output as the first implementation, even though the first implementation explicitly takes the difference of each column and the max and then divides by the sum.

Can someone show mathematically why? Is one correct and the other wrong?

Are the implementations similar in terms of code and time complexity? Which is more efficient?
Tre*_*eld 119
They're both correct, but yours is preferred from the point of view of numerical stability.
You start with
e ^ (x - max(x)) / sum(e ^ (x - max(x)))
By using the fact that a^(b - c) = (a^b)/(a^c) we have
= e ^ x / (e ^ max(x) * sum(e ^ x / e ^ max(x)))
= e ^ x / sum(e ^ x)
which is what the other answer says. You could replace max(x) with any variable and it would cancel out.
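A quick numerical check of that cancellation (softmax_shifted is just a helper name introduced here for illustration; it is not part of either answer):

import numpy as np

def softmax_shifted(x, c):
    # softmax computed after shifting the scores by an arbitrary constant c
    e = np.exp(x - c)
    return e / e.sum()

x = np.array([3.0, 1.0, 0.2])
print(np.allclose(softmax_shifted(x, 0.0), softmax_shifted(x, x.max())))  # True
print(np.allclose(softmax_shifted(x, 0.0), softmax_shifted(x, 42.0)))     # True: any constant cancels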
des*_*aut 90
(Well... much confusion here, both in the question and in the answers...)

To start with, the two solutions (i.e. yours and the suggested one) are not equivalent; they happen to be equivalent only for the special case of 1-D score arrays. You would have discovered this if you had also tried the 2-D score array given as an example in the Udacity quiz.

Results-wise, the only actual difference between the two solutions is the axis=0 argument. To see that this is the case, let's try your solution (your_softmax) and one where the only difference is the axis argument:
import numpy as np
# your solution:
def your_softmax(x):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

# correct solution:
def softmax(x):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum(axis=0)  # only difference
As I said, for a 1-D score array the results are indeed identical:
scores = [3.0, 1.0, 0.2]
print(your_softmax(scores))
# [ 0.8360188 0.11314284 0.05083836]
print(softmax(scores))
# [ 0.8360188 0.11314284 0.05083836]
your_softmax(scores) == softmax(scores)
# array([ True, True, True], dtype=bool)
Nevertheless, here are the results for the 2-D score array given in the Udacity quiz as a test example:
scores2D = np.array([[1, 2, 3, 6],
                     [2, 4, 5, 6],
                     [3, 8, 7, 6]])
print(your_softmax(scores2D))
# [[ 4.89907947e-04 1.33170787e-03 3.61995731e-03 7.27087861e-02]
# [ 1.33170787e-03 9.84006416e-03 2.67480676e-02 7.27087861e-02]
# [ 3.61995731e-03 5.37249300e-01 1.97642972e-01 7.27087861e-02]]
print(softmax(scores2D))
# [[ 0.09003057 0.00242826 0.01587624 0.33333333]
# [ 0.24472847 0.01794253 0.11731043 0.33333333]
# [ 0.66524096 0.97962921 0.86681333 0.33333333]]
The results are different - the second one is indeed identical with the one expected in the Udacity quiz, where all columns indeed sum to 1, which is not the case with the first (wrong) result.

So, all the fuss was actually for an implementation detail - the axis argument. According to the numpy.sum documentation:

The default, axis=None, will sum all of the elements of the input array

while here we want to sum row-wise, hence axis=0. For a 1-D array, the sum of the (only) row and the sum of all the elements happen to be identical, hence your identical results in that case...

The axis issue aside, your implementation (i.e. your choice to subtract the max first) is actually better than the suggested solution! In fact, it is the recommended way of implementing the softmax function - see here for the justification (numerical stability, also pointed out in some of the other answers here).
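As a quick sanity check of that column-wise normalisation, reusing softmax, your_softmax, and scores2D from above:

print(softmax(scores2D).sum(axis=0))   # [1. 1. 1. 1.] (up to rounding): each column sums to 1
print(your_softmax(scores2D).sum())    # ~1.0: the wrong version only normalises over the whole array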
Chu*_*ive 47
So, this is really a comment on desertnaut's answer, but I can't comment on it yet due to my reputation. As he pointed out, your version is only correct if your input consists of a single sample. If your input consists of several samples, it is wrong. However, desertnaut's solution is also wrong. The problem is that he first considers a 1-dimensional input and then a 2-dimensional input. Let me show this to you.
import numpy as np
# your solution:
def your_softmax(x):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

# desertnaut solution (copied from his answer):
def desertnaut_softmax(x):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum(axis=0)  # only difference

# my (correct) solution:
def softmax(z):
    assert len(z.shape) == 2
    s = np.max(z, axis=1)
    s = s[:, np.newaxis]  # necessary step to do broadcasting
    e_x = np.exp(z - s)
    div = np.sum(e_x, axis=1)
    div = div[:, np.newaxis]  # ditto
    return e_x / div
Let's take for example:
x1 = np.array([[1, 2, 3, 6]]) # notice that we put the data into 2 dimensions(!)
This is the output:
your_softmax(x1)
array([[ 0.00626879, 0.01704033, 0.04632042, 0.93037047]])
desertnaut_softmax(x1)
array([[ 1., 1., 1., 1.]])
softmax(x1)
array([[ 0.00626879, 0.01704033, 0.04632042, 0.93037047]])
You can see that desertnaut's version would fail in this situation. (It would not fail if the input were just one-dimensional, like np.array([1, 2, 3, 6]).)

Now let's use 3 samples, since that is the reason why we use a 2-dimensional input. The following x2 is not the same as the one from desertnaut's example.
x2 = np.array([[1, 2, 3, 6],  # sample 1
               [2, 4, 5, 6],  # sample 2
               [1, 2, 3, 6]]) # sample 1 again(!)
This input consists of a batch with 3 samples. But samples one and three are essentially the same. We now expect 3 rows of softmax activations, where the first should be the same as the third and also the same as our activation of x1!
your_softmax(x2)
array([[ 0.00183535, 0.00498899, 0.01356148, 0.27238963],
[ 0.00498899, 0.03686393, 0.10020655, 0.27238963],
[ 0.00183535, 0.00498899, 0.01356148, 0.27238963]])
desertnaut_softmax(x2)
array([[ 0.21194156, 0.10650698, 0.10650698, 0.33333333],
[ 0.57611688, 0.78698604, 0.78698604, 0.33333333],
[ 0.21194156, 0.10650698, 0.10650698, 0.33333333]])
softmax(x2)
array([[ 0.00626879, 0.01704033, 0.04632042, 0.93037047],
[ 0.01203764, 0.08894682, 0.24178252, 0.65723302],
[ 0.00626879, 0.01704033, 0.04632042, 0.93037047]])
I hope you can see that this is only the case with my solution.
softmax(x1) == softmax(x2)[0]
array([[ True, True, True, True]], dtype=bool)
softmax(x1) == softmax(x2)[2]
array([[ True, True, True, True]], dtype=bool)
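As an aside, the same row-wise normalisation can be written slightly more compactly with keepdims=True instead of np.newaxis; a minimal equivalent sketch (softmax_keepdims is just an illustrative name):

def softmax_keepdims(z):
    """Row-wise softmax, equivalent to the softmax above but using keepdims."""
    e_z = np.exp(z - np.max(z, axis=1, keepdims=True))
    return e_z / e_z.sum(axis=1, keepdims=True)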
Additionally, here are the results of TensorFlow's softmax implementation:
import tensorflow as tf
import numpy as np
batch = np.asarray([[1,2,3,6],[2,4,5,6],[1,2,3,6]])
x = tf.placeholder(tf.float32, shape=[None, 4])
y = tf.nn.softmax(x)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(y, feed_dict={x: batch})
And here is the result:
array([[ 0.00626879, 0.01704033, 0.04632042, 0.93037045],
[ 0.01203764, 0.08894681, 0.24178252, 0.657233 ],
[ 0.00626879, 0.01704033, 0.04632042, 0.93037045]], dtype=float32)
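(The snippet above uses the TensorFlow 1.x placeholder/session API. On TensorFlow 2.x, where eager execution is the default, roughly the same result should be obtainable directly; a minimal sketch:)

import numpy as np
import tensorflow as tf

batch = np.asarray([[1, 2, 3, 6], [2, 4, 5, 6], [1, 2, 3, 6]], dtype=np.float32)
print(tf.nn.softmax(batch, axis=-1).numpy())  # row-wise softmax; each row sums to 1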
Rom*_*rac 24
sklearn also offers an implementation of softmax:
from sklearn.utils.extmath import softmax
import numpy as np
x = np.array([[ 0.50839931, 0.49767588, 0.51260159]])
softmax(x)
# output
array([[ 0.3340521 , 0.33048906, 0.33545884]])
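As far as I know, this sklearn helper expects a 2-D array and normalises each row independently (it also subtracts the row maximum internally for numerical stability); a small illustrative call:

x2 = np.array([[1.0, 2.0, 3.0],
               [1.0, 2.0, 6.0]])
print(softmax(x2))               # one softmax per row
print(softmax(x2).sum(axis=1))   # each row sums to 1 (up to rounding)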
Sal*_*ali 11
Mathematically, both sides are equal.

And you can easily show this. Let m = max(x). Now your function softmax returns a vector whose i-th coordinate is equal to

e^(x_i - m) / sum_j e^(x_j - m) = (e^(x_i) / e^m) / (sum_j e^(x_j) / e^m) = e^(x_i) / sum_j e^(x_j)

which is the original softmax. Notice that this works for any m, because for all numbers (even complex ones) e^m != 0.
From a computational complexity point of view they are also equivalent, and both run in O(n) time, where n is the size of the vector.

From a numerical stability point of view, the first solution is preferred, because e^x grows very fast and will overflow even for pretty small values of x. Subtracting the maximum value removes this overflow. To experience what I am talking about, try to feed x = np.array([1000, 5]) into both of your functions. One will return the correct probability; the second will overflow with nan.
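For instance, a minimal sketch of that experiment (softmax_stable and softmax_naive are just illustrative names for the two versions from the question):

import numpy as np

def softmax_stable(x):
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

def softmax_naive(x):
    return np.exp(x) / np.sum(np.exp(x), axis=0)

x = np.array([1000, 5])
print(softmax_stable(x))  # [1. 0.] -- the correct probabilities
print(softmax_naive(x))   # RuntimeWarning: overflow encountered in exp; result contains nan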
Not related to the question, but your solution works only for vectors (the Udacity quiz wants you to calculate it for matrices as well). In order to fix that you need to use sum(axis=0).
EDIT. As of version 1.2.0, scipy includes softmax as a special function:
https://scipy.github.io/devdocs/generated/scipy.special.softmax.html
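A minimal usage sketch (the axis argument selects which dimension is normalised; with the default axis=None the whole array sums to 1):

import numpy as np
from scipy.special import softmax

x = np.array([[1.0, 2.0, 3.0, 6.0],
              [2.0, 4.0, 5.0, 6.0]])
print(softmax(x, axis=1))  # row-wise: each row sums to 1
print(softmax(x))          # axis=None: all entries together sum to 1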
I wrote a function that applies the softmax over any axis:
def softmax(X, theta = 1.0, axis = None):
    """
    Compute the softmax of each element along an axis of X.

    Parameters
    ----------
    X: ND-Array. Probably should be floats.
    theta (optional): float parameter, used as a multiplier
        prior to exponentiation. Default = 1.0
    axis (optional): axis to compute values along. Default is the
        first non-singleton axis.

    Returns an array the same size as X. The result will sum to 1
    along the specified axis.
    """

    # make X at least 2d
    y = np.atleast_2d(X)

    # find axis
    if axis is None:
        axis = next(j[0] for j in enumerate(y.shape) if j[1] > 1)

    # multiply y against the theta parameter,
    y = y * float(theta)

    # subtract the max for numerical stability
    y = y - np.expand_dims(np.max(y, axis = axis), axis)

    # exponentiate y
    y = np.exp(y)

    # take the sum along the specified axis
    ax_sum = np.expand_dims(np.sum(y, axis = axis), axis)

    # finally: divide elementwise
    p = y / ax_sum

    # flatten if X was 1D
    if len(X.shape) == 1: p = p.flatten()

    return p
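For example, a couple of illustrative calls (theta acts as an inverse temperature: values below 1 flatten the resulting distribution):

X = np.array([[1.0, 2.0, 3.0, 6.0],
              [2.0, 4.0, 5.0, 6.0]])

print(softmax(X, axis=1))                  # each row sums to 1
print(softmax(X, theta=0.5, axis=1))       # flatter distribution per row
print(softmax(np.array([3.0, 1.0, 0.2])))  # 1-D input is handled and flattened back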
Subtracting the max, as other users described, is good practice. I wrote a detailed post about it here.

I was curious about the performance difference between these:
import numpy as np

def softmax(x):
    """Compute softmax values for each sets of scores in x."""
    return np.exp(x) / np.sum(np.exp(x), axis=0)

def softmaxv2(x):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

def softmaxv3(x):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / np.sum(e_x, axis=0)

def softmaxv4(x):
    """Compute softmax values for each sets of scores in x."""
    return np.exp(x - np.max(x)) / np.sum(np.exp(x - np.max(x)), axis=0)


x = [10, 10, 18, 9, 15, 3, 1, 2, 1, 10, 10, 10, 8, 15]

Using
print("----- softmax")
%timeit a=softmax(x)
print("----- softmaxv2")
%timeit a=softmaxv2(x)
print("----- softmaxv3")
%timeit a=softmaxv3(x)
print("----- softmaxv4")
%timeit a=softmaxv4(x)

and increasing the values inside x (+100, +200, +500, ...), I got consistently better results with the original numpy version (here is just one test):
----- softmax
The slowest run took 8.07 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 17.8 µs per loop
----- softmaxv2
The slowest run took 4.30 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 23 µs per loop
----- softmaxv3
The slowest run took 4.06 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 23 µs per loop
----- softmaxv4
10000 loops, best of 3: 23 µs per loop

Until... the values inside x reached ~800, at which point I got:
----- softmax
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:4: RuntimeWarning: overflow encountered in exp
  after removing the cwd from sys.path.
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:4: RuntimeWarning: invalid value encountered in true_divide
  after removing the cwd from sys.path.
The slowest run took 18.41 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 23.6 µs per loop
----- softmaxv2
The slowest run took 4.18 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 22.8 µs per loop
----- softmaxv3
The slowest run took 19.44 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 23.6 µs per loop
----- softmaxv4
The slowest run took 16.82 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 22.7 µs per loop

As some have said, your version is more numerically stable "for large numbers". For small numbers it may be the other way around.
To offer an alternative solution, consider the case where your arguments are extremely large in magnitude, such that exp(x) would underflow (in the negative case) or overflow (in the positive case). Here you want to remain in log space as long as possible, exponentiating only at the end, where you can trust the result will be well-behaved.
import scipy.special as sc
import numpy as np
def softmax(x: np.ndarray) -> np.ndarray:
    return np.exp(x - sc.logsumexp(x))
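A quick check of the behaviour on extreme inputs, using the function defined just above:

print(softmax(np.array([1000.0, 5.0])))      # [1. 0.] with no overflow warning
print(softmax(np.array([-1000.0, -995.0])))  # finite values that still sum to 1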