What precision does numpy.float128 map to internally? Is it __float128 or long double? (Or something else entirely!?)
If anyone knows, a related question: is it safe in C to cast a __float128 to a (16-byte) long double? (This is for interfacing with a C library that works in long double.)
Edit: in response to the comments, the platform is 'Linux-3.0.0-14-generic-x86_64-with-Ubuntu-11.10-oneiric'. And if numpy.float128 turns out to have platform-dependent precision, that is useful knowledge for me too!
To be clear, it is the precision I am interested in, not the size of an element.
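A minimal way to check this on a given build (the comments below describe what an x86-64 Linux install typically reports; treat them as an assumption to verify, not a guarantee):

import numpy as np
import platform

print(platform.platform())
print(np.float128 is np.longdouble)   # where float128 exists it is usually just an alias for numpy's longdouble

info = np.finfo(np.float128)
print(info.nmant, info.precision, info.eps)
# On x86-64 Linux this typically reports nmant=63, i.e. the 80-bit x87
# extended-precision C long double padded out to 16 bytes of storage
# (~18-19 significant decimal digits). A true IEEE binary128 / __float128
# type would report nmant=112 (~33-34 digits).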
I want to fit a curve to the data below, but I get this error:
<ipython-input>:2: RuntimeWarning: overflow encountered in exp
Does anyone know what is causing this? I fitted this curve in Matlab with a different data type and it worked fine, and I used the initial conditions from that Matlab code. The two curves are the same, except that in this case the y-axis values are much higher.
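For reference on the warning itself (not the truncated model below), a minimal sketch of how np.exp overflows float64 and one common way to keep a fit inside the representable range:

import numpy as np

# float64 overflows once the exponent exceeds roughly 709.78, which is
# what triggers the "overflow encountered in exp" RuntimeWarning above.
print(np.exp(709.0))   # ~8.2e307, still finite
print(np.exp(710.0))   # inf, plus the RuntimeWarning

# A common workaround while the optimizer explores poor parameter values:
# clip the exponent so the model never leaves the float64 range.
def safe_exp(x):
    return np.exp(np.clip(x, -700.0, 700.0))

print(safe_exp(1e6))   # finite (equals np.exp(700.0)) instead of inf

Another option is to pass bounds= (and a sensible p0=) to scipy.optimize.curve_fit so the optimizer cannot wander into parameter regions where the exponent blows up.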
import numpy as np
import scipy.optimize
#sympy.init_printing()
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
list_R1_fit = [
19.53218114920747, 42.52167990454083, 60.95540646861309,
70.10646960395906, 73.99897337091254, 75.36736639556727,
75.69578522881915, 75.62147077733012, 75.42605227692485,
75.21657113589387, 75.04519265636262, 74.94144261816007,
74.92153132015117, 74.99475606015201, 75.15746897265564
]
tau_list = [
0.052, 0.12, 0.252,
0.464, 0.792, 1.264,
1.928, 2.824, 4,
5.600, 7.795, 10.806,
14.928, 20.599, 28.000
]
array_R1_fit = np.asarray(list_R1_fit)
tau_array = np.asarray(tau_list)
plt.plot(tau_array, array_R1_fit, 'o')
def func_R1_fit( t, a0, …

I have a file containing a total of 4950 values like:
0.012345678912345678
I read the file with:
import numpy
a = numpy.genfromtxt(file_name, dtype=str, delimiter=',')  # a.shape = (4950L, 1L); dtype=str as I don't want to compromise accuracy
# say a == ['0.000000000000000001', '-0.000000000000000002', ...., '0.000000000004950']
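As an aside, a quick check (with a made-up value) of why dtype=str is kept here instead of letting genfromtxt produce float64: a float64 round-trip does not preserve all 18 decimal places, while a Decimal built from the string does:

from decimal import Decimal

s = '0.012345678912345678'        # 18 decimal places, like the values in the file
f = float(s)                      # nearest representable float64
print(Decimal(f))                 # exact decimal expansion of that float64; not equal to s
print(Decimal(s) == Decimal(f))   # False - the float64 is only an approximation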
What I want to end up with is a matrix b of size (100L, 100L):
An example (accuracy is important):
array = ['1','2','-3','-5','6','-7'] # In reality the data is up to 18 decimal places.
final_matrix = [
    ['0', '1', '2', '-3'],
    ['-1', '0', '-5', '6'],
    ['-2', '5', '0', '-7'],
    ['3', '-6', '7', '0']
]
What is the most efficient way to achieve this?
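A minimal sketch under the assumptions implied by the example: the 4950 values fill the strict upper triangle of a 100x100 matrix row by row, the diagonal is zero, and the lower triangle is the negated transpose; decimal.Decimal values in an object array preserve all 18 decimal places (values and n are illustrative stand-ins):

import numpy as np
from decimal import Decimal

values = ['1', '2', '-3', '-5', '6', '-7']     # stand-in for the 4950 strings read from the file
n = 4                                          # n = 100 for the real data, since 100*99/2 == 4950

b = np.full((n, n), Decimal(0), dtype=object)  # object dtype keeps exact Decimal values
iu = np.triu_indices(n, k=1)                   # strict upper triangle, row-major order
b[iu] = [Decimal(v) for v in values]           # fill the upper triangle in file order
b[iu[1], iu[0]] = [-v for v in b[iu]]          # lower triangle = negated transpose
print(b)

With the real data, a.ravel() would take the place of values, assuming the file stores them in that row-major upper-triangle order.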