Computing divergence of a vector field with python

nyv*_*tak 10 python numpy scipy

Is there a function that can be used to compute the divergence of a vector field? (In matlab there is.) I would expect it to exist in numpy/scipy, but I can't find it using Google.

I need to compute div[A * grad(F)], where

F = np.array([[1,2,3,4],[5,6,7,8]]) # (2D numpy ndarray)

A = np.array([[1,2,3,4],[1,2,3,4]]) # (2D numpy ndarray)

so grad(F) is a list of 2D ndarrays.

I know I can compute the divergence like this, but don't want to reinvent the wheel. (I'd also hope for something more optimized.) Does anyone have suggestions?

小智 12

Just a hint for everybody reading this:

the functions above do not compute the divergence of a vector field. They sum the derivatives of a scalar field A:

result = dA/dx + dA/dy

in contrast to a vector field (with a three dimensional example):

result = sum dAi/dxi = dAx/dx + dAy/dy + dAz/dz

Downvote them all! It is simply mathematically wrong.

Cheers!

  • It's a bit odd that this is at the bottom of the page; the other answers are mathematically incorrect (4 upvotes)
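To make the distinction concrete, here is a small sketch; the field F = (x, -y) is an assumed example whose true divergence is 0 everywhere:

```python
import numpy as np

# Assumed example field F = (x, -y) on a unit-spaced 'ij' grid: div F = 1 - 1 = 0
x, y = np.meshgrid(np.arange(5.0), np.arange(5.0), indexing='ij')
Fx, Fy = x, -y

# Divergence of a vector field: each component differentiated along its own axis
true_div = np.gradient(Fx, axis=0) + np.gradient(Fy, axis=1)
assert np.allclose(true_div, 0.0)

# The scalar-field version from the other answers sums *all* partials of one array:
scalar_sum = np.sum(np.gradient(Fx), axis=0)   # dFx/dx + dFx/dy, not a divergence
assert not np.allclose(scalar_sum, true_div)
```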

小智 11

import numpy as np

def divergence(field):
    "return the divergence of an n-D field"
    return np.sum(np.gradient(field), axis=0)


Juh*_*uh_ 10

@user2818943's answer is good, but it can be optimized a little:

from functools import reduce  # needed on Python 3

def divergence(F):
    """ compute the divergence of n-D scalar field `F` """
    return reduce(np.add, np.gradient(F))

Timeit:

F = np.random.rand(100,100)
timeit reduce(np.add,np.gradient(F))
# 1000 loops, best of 3: 318 us per loop

timeit np.sum(np.gradient(F),axis=0)
# 100 loops, best of 3: 2.27 ms per loop

About 7 times faster: sum implicitly constructs a 3d array from the list of gradient fields returned by np.gradient. This is avoided by using reduce.
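As a quick sanity check (the array shape here is an arbitrary assumption), both expressions produce the same values; `reduce` just avoids stacking the gradient list into one 3d array first:

```python
import numpy as np
from functools import reduce

F = np.random.rand(50, 50)
stacked = np.sum(np.gradient(F), axis=0)   # builds a (2, 50, 50) array, then sums
pairwise = reduce(np.add, np.gradient(F))  # adds the two gradient arrays directly
assert np.allclose(stacked, pairwise)
assert stacked.shape == (50, 50)
```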


Now, in your question, what did you mean by div[A * grad(F)]?

  1. about A * grad(F): A is a 2d array, and grad(F) is a list of 2d arrays. So I take it to mean multiplying each gradient field by A.
  2. applying divergence to the (scaled by A) gradient field is unclear. By definition, div(F) = d(F)/dx + d(F)/dy + .... I guess this is just a formulation error.

For point 1, multiplying each summed element Bi by the same factor A can be factorized:

Sum(A*Bi) = A*Sum(Bi)

Thus, you can obtain this weighted divergence simply with: A*divergence(F)
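A quick numerical check of this factorization (random arrays as assumed inputs, using the answer's gradient-sum "divergence"):

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)
F = rng.random((20, 20))
A = rng.random((20, 20))

# Weight each gradient field by A and then sum ...
lhs = reduce(np.add, (A * g for g in np.gradient(F)))
# ... equals factoring A out of the sum: Sum(A*Bi) = A*Sum(Bi)
rhs = A * reduce(np.add, np.gradient(F))
assert np.allclose(lhs, rhs)
```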

If A is instead a list of factors, one for each dimension, then a solution would be:

from functools import reduce  # needed on Python 3

def weighted_divergence(W, F):
    """
    Return the divergence of n-D array `F` with gradient weighted by `W`

    `W` is a list of factors for each dimension of F: the gradient of `F` over
    the `i`th dimension is multiplied by `W[i]`. Each `W[i]` can be a scalar
    or an array with same (or broadcastable) shape as `F`.
    """
    wGrad = map(np.multiply, W, np.gradient(F))
    return reduce(np.add, wGrad)

result = weighted_divergence(A,F)

  • Isn't the divergence of a vector field F = d(Fx)/dx + d(Fy)/dy + ...? The correct formula is more like `np.ufunc.reduce(np.add, [np.gradient(F[i], axis=i) for i in range(len(F))])` (3 upvotes)

Dan*_*iel 8

Based on Juh_'s answer, but modified for the correct divergence of a vector field formula:

def divergence(f):
    """
    Computes the divergence of the vector field f, corresponding to dFx/dx + dFy/dy + ...
    :param f: List of ndarrays, where every item of the list is one dimension of the vector field
    :return: Single ndarray of the same shape as each of the items in f, which corresponds to a scalar field
    """
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[i], axis=i) for i in range(num_dims)])

Matlab's documentation uses this exact formula (scroll down to Divergence of a Vector Field).
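A minimal sanity check of this function, assuming the linear field F = (x, y) on a unit-spaced 'ij' grid, whose divergence is 2 everywhere:

```python
import numpy as np

def divergence(f):
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[i], axis=i) for i in range(num_dims)])

# Assumed test field F = (x, y): dFx/dx + dFy/dy = 1 + 1 = 2
x, y = np.meshgrid(np.arange(5.0), np.arange(5.0), indexing='ij')
div = divergence([x, y])
assert np.allclose(div, 2.0)
```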


Pau*_*hen 6

Daniel's modified version is the correct answer; let me explain the custom divergence function in more detail:

The function np.gradient() is defined as: np.gradient(f) = [df/dx, df/dy, df/dz, ...]

But we need to define the divergence function as: divergence(f) = dfx/dx + dfy/dy + dfz/dz + ..., i.e. the gradient of each component is taken only along that component's own axis, and the results are summed.
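The return structure of np.gradient can be checked directly; the small example array below is an assumption for illustration:

```python
import numpy as np

# f increases by 4 along axis 0 (rows) and by 1 along axis 1 (columns)
f = np.arange(12, dtype=float).reshape(3, 4)
grads = np.gradient(f)           # one array per axis: [df/d(axis0), df/d(axis1)]
assert len(grads) == 2
assert np.allclose(grads[0], 4.0)
assert np.allclose(grads[1], 1.0)
```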

Let's test it and compare with the divergence example in matlab:

import numpy as np
import matplotlib.pyplot as plt

NY = 50
ymin = -2.
ymax = 2.
dy = (ymax -ymin )/(NY-1.)

NX = NY
xmin = -2.
xmax = 2.
dx = (xmax -xmin)/(NX-1.)

def divergence(f):
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[i], axis=i) for i in range(num_dims)])

y = np.array([ ymin + float(i)*dy for i in range(NY)])
x = np.array([ xmin + float(i)*dx for i in range(NX)])

x, y = np.meshgrid( x, y, indexing = 'ij', sparse = False)

Fx  = np.cos(x + 2*y)
Fy  = np.sin(x - 2*y)

F = [Fx, Fy]
g = divergence(F)

plt.pcolormesh(x, y, g)
plt.colorbar()
plt.savefig( 'Div' + str(NY) +'.png', format = 'png')
plt.show()

(image: pcolormesh plot of the computed divergence)

---------- Updated version: including the differentiation step ----------------

Thanks to @henry's comment: np.gradient takes a default step of 1, so the results may show some mismatch. We can provide our own differentiation steps.
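The effect of the spacing argument can be seen in isolation; this 1-D sketch with an assumed step of 0.25 shows what the default step of 1 gets wrong:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 5)     # uniform step h = 0.25
f = x**2
g_unit = np.gradient(f)          # assumes step 1, so values are 4x too large here
g_h = np.gradient(f, 0.25)       # pass the real step to rescale correctly
assert np.allclose(g_h, g_unit / 0.25)
```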

#/sf/answers/3353350521/
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable

NY = 50
ymin = -2.
ymax = 2.
dy = (ymax -ymin )/(NY-1.)

NX = NY
xmin = -2.
xmax = 2.
dx = (xmax -xmin)/(NX-1.)


def divergence(f,h):
    """
    div(F) = dFx/dx + dFy/dy + ...
    g = np.gradient(Fx,dx, axis=1)+ np.gradient(Fy,dy, axis=0) #2D
    g = np.gradient(Fx,dx, axis=2)+ np.gradient(Fy,dy, axis=1) +np.gradient(Fz,dz,axis=0) #3D
    """
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[i], h[i], axis=i) for i in range(num_dims)])

y = np.array([ ymin + float(i)*dy for i in range(NY)])
x = np.array([ xmin + float(i)*dx for i in range(NX)])

x, y = np.meshgrid( x, y, indexing = 'ij', sparse = False)

Fx  = np.cos(x + 2*y)
Fy  = np.sin(x - 2*y)

F = [Fx, Fy]
h = [dx, dy]



print('plotting')
rows = 1
cols = 2
#plt.clf()
plt.figure(figsize=(cols*3.5,rows*3.5))
plt.minorticks_on()


#g = np.gradient(Fx,dx, axis=1)+np.gradient(Fy,dy, axis=0) # equivalent to our func
g = divergence(F,h)
ax = plt.subplot(rows,cols,1,aspect='equal',title='div numerical')
#im=plt.pcolormesh(x, y, g)
im = plt.pcolormesh(x, y, g, shading='nearest', cmap=plt.cm.get_cmap('coolwarm'))
plt.quiver(x,y,Fx,Fy)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plt.colorbar(im, cax = cax,format='%.1f')


g = -np.sin(x+2*y) -2*np.cos(x-2*y)
ax = plt.subplot(rows,cols,2,aspect='equal',title='div analytical')
#im = plt.pcolormesh(x, y, g)
im = plt.pcolormesh(x, y, g, shading='nearest', cmap=plt.cm.get_cmap('coolwarm'))
plt.quiver(x,y,Fx,Fy)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plt.colorbar(im, cax = cax,format='%.1f')


plt.tight_layout()
plt.savefig( 'divergence.png', format = 'png')
plt.show()

The numerical result compared with the analytical result