I wrote a basic script in Python 3 to compute the Collatz conjecture. It takes a positive integer as input and returns the number of steps until the sequence drops down to 1.
My script works correctly for any integer input below 2 trillion, but above that threshold the outputs are too small.
As an example, here are some inputs, my script's output, and the actual correct output:
Integer Input        Script Output   Correct Output
989,345,275,647      1,348           1,348
1,122,382,791,663    1,356           1,356
1,444,338,092,271    1,408           1,408
1,899,148,184,679    1,411           1,411
2,081,751,768,559    385             1,437
2,775,669,024,745    388             1,440
3,700,892,032,993    391             1,443
3,743,559,068,799    497             1,549
The correct output values are based on the following link: http://www.ericr.nl/wondrous/delrecs.html
For inputs above 2 trillion, my script's output is always exactly 1,052 less than the correct output, but I can't figure out what is causing this.
Can anyone explain what is wrong, and how to update/fix the script so it works for all inputs? I thought Python could handle arbitrarily large numbers without a problem...
Thanks!
# Python Code for the Collatz Conjecture
# Rules: Take any integer 'n' and assess:
# If integer is even, divide by 2 (n/2)
# If integer is odd, multiply by 3 and add 1 (3n+1)
# …
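The script itself is cut off above. A minimal sketch of such a step counter, written under the assumption that the even case in the original used n / 2: in Python 3, / always returns a float, and although inputs near 2 trillion fit in a float exactly, the intermediate values of these trajectories climb past 2**53, where a float can no longer represent every integer, which would produce exactly this kind of constant undercount. Integer division with // keeps the arithmetic exact for arbitrarily large ints:

def collatz_steps(n):
    # Count steps until the Collatz sequence reaches 1.
    # n //= 2 keeps n an exact int; n = n / 2 would convert it to a
    # float and silently round once intermediate values exceed 2**53.
    steps = 0
    while n > 1:
        if n % 2 == 0:
            n //= 2
        else:
            n = 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(989345275647))  # expected: 1,348 per the table above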
I'm trying to train a LightGBM ML model in Python using rmsle as the evaluation metric, but I run into a problem when I try to include early stopping.
Here is my code:
import numpy as np
import pandas as pd
import lightgbm as lgb
from sklearn.model_selection import train_test_split
df_train = pd.read_csv('train_data.csv')
X_train = df_train.drop('target', axis=1)
y_train = np.log(df_train['target'])
sample_params = {
    'boosting_type': 'gbdt',
    'objective': 'regression',
    'random_state': 42,
    'metric': 'rmsle',
    'lambda_l1': 5,
    'lambda_l2': 5,
    'num_leaves': 5,
    'bagging_freq': 5,
    'max_depth': 5,
    'max_bin': 5,
    'min_child_samples': 5,
    'feature_fraction': 0.5,
    'bagging_fraction': 0.5,
    'learning_rate': 0.1,
}
X_train_tr, X_train_val, y_train_tr, y_train_val = train_test_split(X_train, y_train, test_size=0.2, random_state=42)
def train_lightgbm(X_train_tr, y_train_tr, X_train_val, y_train_val, …
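Note that rmsle is not one of LightGBM's built-in metric values, which would leave early stopping with nothing to monitor on the validation set. One common workaround is a custom evaluation function passed to lgb.train via feval. A sketch under that assumption (the names rmsle_feval, train_set, and val_set, the 1000-round budget, and the 50-round patience are illustrative, not from the original code):

import numpy as np
import lightgbm as lgb

def rmsle_feval(preds, eval_data):
    # Custom metric for lgb.train: must return (name, value, is_higher_better).
    # The targets were log-transformed above, so map both predictions and
    # labels back to the original scale before applying the RMSLE formula.
    y_true = np.exp(eval_data.get_label())
    y_pred = np.exp(preds)
    rmsle = np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))
    return 'rmsle', rmsle, False

train_set = lgb.Dataset(X_train_tr, label=y_train_tr)
val_set = lgb.Dataset(X_train_val, label=y_train_val, reference=train_set)

params = dict(sample_params, metric='None')  # 'None' disables built-in metrics
model = lgb.train(
    params,
    train_set,
    num_boost_round=1000,
    valid_sets=[val_set],
    feval=rmsle_feval,
    callbacks=[lgb.early_stopping(stopping_rounds=50)],
)

Since y_train is already np.log(target), simply setting metric='rmse' on the transformed scale measures nearly the same quantity as RMSLE (it differs only in log vs. log1p), so that is a simpler alternative if the approximation is acceptable.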
python machine-learning non-linear-regression lightgbm early-stopping