What does the "_" in a lambda function mean, and why is it used?

Asked by Zhe*_*Xin (tags: python, lambda)

I have an anonymous function that takes "_" as its parameter, and I don't know what it means or why it is used here.

The function is:

f = lambda _: model.loss(X, y)[0]

grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)

The model's loss function:

def loss(self, X, y=None):

    # Unpack variables from the params dictionary
    W1, b1 = self.params['W1'], self.params['b1']
    W2, b2 = self.params['W2'], self.params['b2']

    # Forward pass: affine -> ReLU -> affine
    h1, h1_cache = affine_relu_forward(X, W1, b1)
    scores, h2_cache = affine_forward(h1, W2, b2)

    # If y is None then we are in test mode, so just return scores
    if y is None:
        return scores

    loss, grads = 0, {}

    # Softmax loss plus L2 regularization on the weights
    loss, dscores = softmax_loss(scores, y)
    loss = loss + 0.5 * self.reg * (np.sum(W2**2) + np.sum(W1**2))

    # Backward pass: gradients for each parameter, plus regularization terms
    dh1, grads['W2'], grads['b2'] = affine_backward(dscores, h2_cache)
    dX, grads['W1'], grads['b1'] = affine_relu_backward(dh1, h1_cache)
    grads['W1'] += self.reg * W1
    grads['W2'] += self.reg * W2

    return loss, grads

And the eval_numerical_gradient function:

import numpy as np

def eval_numerical_gradient(f, x, verbose=True, h=0.00001):

    fx = f(x) # evaluate function value at original point
    grad = np.zeros_like(x)
    # iterate over all indexes in x
    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    while not it.finished:

        # evaluate function at x+h
        ix = it.multi_index
        oldval = x[ix]
        x[ix] = oldval + h # increment by h
        fxph = f(x) # evaluate f(x + h)
        x[ix] = oldval - h
        fxmh = f(x) # evaluate f(x - h)
        x[ix] = oldval # restore

        # compute the partial derivative with centered formula
        grad[ix] = (fxph - fxmh) / (2 * h) # the slope
        if verbose:
            print(ix, grad[ix])
        it.iternext() # step to next dimension

    return grad

The loss function itself is not complicated; what I want to know is what the "_" stands for and what purpose it serves here.

Answer from Dee*_*ace:

It is a Python convention to name a variable _ when its value will not be used. There is no black magic involved; it is an ordinary variable name that behaves exactly as you would expect.
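For instance, this standalone snippet (not from the original code, just for illustration) shows that _ is a normal parameter name whose value is simply ignored:

f = lambda _: 42   # accepts one argument but never looks at it

print(f(0))           # 42
print(f("anything"))  # 42 -- the argument is accepted but unused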

It is used in this case because f is passed as a callback, and whenever it is called it is handed one argument (fxph = f(x)), even though the lambda has no use for that value.

If f were implemented as

f = lambda: model.loss(X, y)[0]

then calling it would raise TypeError: <lambda>() takes 0 positional arguments but 1 was given.
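To see why the ignored parameter still works, here is a minimal toy sketch (the ToyModel class and its numbers are made up for illustration; it reuses the eval_numerical_gradient function shown in the question). The gradient routine perturbs the parameter array in place and then calls f(x); the lambda can ignore x because the model reads that same array back out of its own params dictionary:

import numpy as np

class ToyModel:
    def __init__(self):
        # a single parameter array, analogous to model.params in the question
        self.params = {'w': np.array([3.0])}

    def loss(self):
        w = self.params['w']
        return np.sum(w ** 2)  # d(loss)/dw = 2 * w

model = ToyModel()
f = lambda _: model.loss()  # the argument is ignored, hence '_'

# eval_numerical_gradient nudges model.params['w'] in place and calls f(x);
# model.loss() sees the change through self.params, so f never needs x itself.
grad = eval_numerical_gradient(f, model.params['w'], verbose=False)
print(grad)  # approximately [6.], i.e. 2 * 3.0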