I need to process several pages of data from a server, and I would like to make a generator for it, like this. Unfortunately I get TypeError: 'async_generator' object is not iterable:
async def get_data():
    i = 0
    while i < 3:
        i += 1
        data = await http_call()  # call to http-server here
        yield data

data = [i for i in get_data()]  # inside a loop
The next variant raises TypeError: object async_generator can't be used in 'await' expression:
data = [i for i in await get_data()] # inside a loop
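For reference, a minimal runnable sketch (with a stand-in http_call, since the real one is not shown) of consuming an async generator with an async comprehension instead of a plain for loop:

```python
# Minimal sketch: an async generator must be consumed with `async for`
# (or an async comprehension), not a plain `for` or `await`.
import asyncio

async def http_call():
    # stand-in for the real HTTP request
    await asyncio.sleep(0)
    return "page"

async def get_data():
    i = 0
    while i < 3:
        i += 1
        yield await http_call()

async def main():
    # async comprehension: valid only inside an async def
    return [item async for item in get_data()]

print(asyncio.run(main()))  # -> ['page', 'page', 'page']
```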
I need to perform an exploratory factor analysis and compute the score of each observation with Python (assuming there is only 1 underlying factor). sklearn.decomposition.FactorAnalysis() seems to be the way to go, but unfortunately the documentation and the example (sadly, I could not find other examples) are not clear enough for me to figure out how to get the job done.
I have the following test file with 41 observations of 29 variables (test.csv):
49.6,34917,24325.4,305,101350,98678,254.8,276.9,47.5,1,3,5.6,3.59,11.9,0,97.5,97.6,8,10,100,0,0,96.93,610.1,100,1718.22,6.7,28,5
275.8,14667,11114.4,775,75002,74677,30,109,9.1,1,0,6.5,3.01,8.2,1,97.5,97.6,8,8,100,0,0,100,1558,100,2063.17,5.5,64,5
2.3,9372.5,8035.4,4.6,8111,8200,8.01,130,1.2,0,5,0,3.33,6.09,1,97.9,97.9,8,8,67.3,342.3,0,99.96,18.3,53,1457.27,4.8,8,4
7.10,13198.0,13266.4,1.1,708,695,6.1,80,0.4,0,4,0,3.1,8.2,1,97.8,97.9,8,8,45,82.7,0,99.68,4.5,80,1718.22,13.8,0,3
1.97,2466.7,2900.6,19.7,5358,5335,10.1,23,0.5,0,2,0,3.14,8.2,0,97.3,97.2,9,9,74.5,98.2,0,99.64,79.8,54,1367.89,6.4,12,4
2.40,2999.4,2218.2,0.80,2045,2100,8.9,10,1.5,1,3,0,2.82,8.6,0,97.4,97.2,8,8,47.2,323.8,0,99.996,13.6,24,1249.67,2.7,12,3
0.59,4120.8,5314.5,0.54,14680,13688,14.9,117,1.1,0,3,0,2.94,3.4,0,97.6,97.7,8,8,11.8,872.6,0,100,9.3,52,1251.67,14,14,2
0.72,2067.7,2364,3,367,298,7.2,60,2.5,0,12,0,2.97,10.5,0,97.5,97.6,8,8,74.7,186.8,0,99.13,12,57,1800.45,2.7,4,2
1.14,2751.9,3066.8,3.5,1429,1498,7.7,9,1.6,0,3,0,2.86,7.7,0,97.6,97.8,8,9,76.7,240.1,0,99.93,13.6,60,1259.97,15,8,3
1.29,4802.6,5026.1,2.7,7859,7789,6.5,45,1.9,0,3,0,2.5,8.2,0,98,98,8,8,34,297.5,0,99.95,10,30,1306.44,8.5,0,4
0.40,639.0,660.3,1.3,23,25,1.5,9,0.1,0,1,0,2.5,8.2,0,97.7,97.8,8,8,94.2,0,0,100,4.3,50,1565.44,19.2,0,4
0.26,430.7,608.1,2,33,28,2.5,7,0.4,0,6,0,2.5,8.2,0,97.4,97.4,8,8,76.5,0,0,98.31,8,54,1490.08,0,0,4
4.99,2141.2,2357.6,3.60,339,320,8.1,7,0.2,0,8,0,2.5,5.9,0,97.3,97.4,8,8,58.1,206.3,0,99.58,13.2,95,1122.92,14.2,8,2
0.36,1453.7,1362.2,3.50,796,785,3.7,9,0.1,0,9,0,2.5,13.6,0,98,98.1,8,8,91.4,214.6,0,99.74,7.5,53,1751.98,11.5,0,2
0.36,1657.5,2421.1,2.8,722,690,8.1,8,0.4,0,1,0,2.5,8.2,0,97.2,97.3,11,12,37.4,404.2,0,99.98,10.9,35,1772.33,10.2,8,3
1.14,5635.2,5649.6,3,2681,2530,5.4,20,0.3,0,1,0,3.1,8.2,0,97.7,97.8,8,11,50.1,384.7,0,99.02,11.6,27,1306.08,16,0,2
0.6,1055.9,1487.9,1.3,69,65,2.5,6,0.4,0,8,0,2.5,8.2,0,97.9,97.7,8,11,63,137.9,0,99.98,5.1,48,1595.06,0,0,4
0.08,795.3,1174.7,1.40,85,76,2.2,7,0.2,0,0,0,2.5,8.2,0,97.4,97.5,8,8,39.3,149.3,0,98.27,5.1,52,1903.9,8.1,0,2
0.90,2514.0,2644.4,2.6,1173,1104,5.5,43,0.8,0,10,0,2.5,13.6,0,97.5,97.5,8,10,58.7,170.5,0,80.29,10,34,1292.72,4,0,2
0.27,870.4,949.7,1.8,252,240,2.2,31,0.2,0,1,0,2.5,8.2,0,97.5,97.6,8,8,64.5,0,0,100,6.6,29,1483.18,9.1,0,3
0.41,1295.1,2052.3,2.60,2248,2135,6.0,12,0.8,0,4,0,2.7,8.2,0,97.7,97.7,8,8,71.1,261.3,0,91.86,4.6,21,1221.71,9.4,0,4
1.10,3544.2,4268.9,2.1,735,730,6.6,10,1.7,0,14,0,2.5,8.2,0,97.7,97.8,8,8,52,317.2,0,99.62,9.8,46,1271.63,14.2,0,3
0.22,899.3,888.2,1.80,220,218,3.6,7,0.5,0,1,0,2.5,8.2,0,97.2,97.5,8,8,22.5,0,0,70.79,10.6,32,1508.02,0,0,4
0.24,1712.8,1735.5,1.30,41,35,5.4,7,0.5,0,1,0,3.28,8.2,0,97.8,97.8,9,10,16.6,720.2,0,99.98,4.3,53,1324.46,0,4,2
0.2,558.4,631.9,1.7,65,64,2.5,7,0.2,0,5,0,2.5,8.2,0,97.7,97.5,8,8,60.7,0,0,99.38,6.1,52,1535.08,0,0,2
0.21,599.9,1029,1.1,69,70,3.7,85.7,0.1,0,12,0,2.5,8.2,0,97.4,97.5,8,8,48.6,221.2,0,100,5.4,40,1381.44,25.6,0,2
0.10,131.3,190.6,1.6,28,25,2.9,7,0.3,0,3,0,2.5,8.2,0,97.7,97.8,8,8,58.9,189.4,0,99.93,6.9,42,1525.58,17.4,0,3
0.44,3881.4,5067.3,0.9,2732,2500,11.2,10,1.5,0,5,0,2.67,8.2,0,97.4,97.3,8,11,14.5,1326.2,0,99.06,3.7,31,1120.54,10.3,10,2
0.18,1024.8,1651.3,1.01,358,345,4.6,35,0.3,0,2,0,2.5,8.2,0,97.8,97.9,8,10,15.9,790.2,0,100,4.3,48,1531.04,10.5,0,3
0.46,682.9,784.2,1.8,103,109,2.2,8,0.4,0,4,0,2.5,8.2,0,97.8,97.9,8,8,82.7,166.3,0,99.96,6.4,44,1373.6,13.5,0,2
0.12,370.4,420.0,1.10,28,25,3.4,10,0.1,0,6,0,2.57,8.2,0,97.6,97.8,8,11,51.6,120,0,99.85,8.1,40,1297.94,0,0,3
0.03,552.4,555.1,0.8,54,49,3.5,10,0.4,0,0,0,2.5,8.2,0,97.4,97.6,8,10,33.6,594.5,0,100,3.2,41,1184.34,6.6,0,3
0.21,1256.5,2434.8,0.9,1265,1138,6.3,20,1.3,0,2,0,2.6,8.2,0,98,97.9,8,9,20.1,881,0,99.1,3.9,31,1265.93,7.8,0,3
0.09,320.6,745.7,1.10,37,25,2.7,8,0.3,0,9,0,2.5,8.2,0,98,97.8,8,8,49.2,376.4,0,99.95,4.3,39,1285.11,0,0,3
0.08,452.7,570.9,1,18,20,4.7,9,0.6,0,2,0,2.45,8.2,0,97.1,97.1,8,8,19.9,1103.8,0,99.996,2.9,22,1562.61,21.9,0,3
0.13,967.9,947.2,1,74,65,4.0,25,1.4,0,6,0,2.5,8.2,0,98,98,9,11,30.1,503.1,0,99.999,3.4,55,1269.33,0,0,2
0.07,495.0,570.3,1.2,27,30,4.3,7,0.5,0,12,0,3.62,8.2,0,98.2,98.2,15,13,29.8,430.5,0,99.7,4.9,40,1461.79,14.6,0,2
0.17,681.9,537.4,1.1,113,120,2.9,12,0.4,0,8,0,2.5,8.2,0,98.2,98.3,8,8,24,74.3,0,100,5,43,1290.16,0,0,3
0.05,639.7,898.2,0.40,9,12,3.0,7,0.1,0,1,0,2.5,8.2,0,97.6,97.8,15,11,11.9,1221.1,0,99.996,1.7,40,1372,7,0,4
0.65,2067.8,2084.2,2.50,414,398,7.3,6,0.7,0,4,0,2.16,8.2,0,97.8,97.9,12,12,60.1,146.3,0,99.96,10.4,44,1059.68,7.4,0,2
0.12,804.4,1416.4,3.30,579,602,4.2,7,1.8,0,1,0,2.5,8.2,0,98.1,98.3,8,10,8.9,2492.3,0,95.4,2.2,34,1345.76,7,0,2
Using code I wrote based on the official example and on this post, I get strange results. Code:
from sklearn import decomposition, preprocessing
from sklearn.cross_validation import cross_val_score
import csv
import numpy as np

data = np.genfromtxt('test.csv', delimiter=',')

def compute_scores(X):
    n_components = np.arange(0, len(X), 1) …

I have a few questions about the legend of the following plot:

It seems I need to use the scale_manual and guide_legend options, but all my attempts have failed miserably.
Here is the code that creates the plot: plotDumping is the function that draws the plot, updateData generates the data frame for the plot, and updateLabels generates the footnote for the plot.
library(ggplot2)
library(grid)
library(gridExtra)
library(scales)
max_waste_volume <- 2000
Illegal_dumping_fine_P <- 300000
Illigal_landfilling_fine_P1 <- 500000
Fine_probability_k <- 0.5
Official_tax_Ta <- 600
# mwv = max_waste_volume
# P = Illegal_dumping_fine_P
# P1 = Illigal_landfilling_fine_P1
# k = Fine_probability_k
# Ta = Official_tax_Ta
updateData <- function(mwv, k, P1, P, Ta){
  # creates and(or) updates the global data frame that provides data for the plot
  new_data <<- NULL
  new_data …

For my application I need to set some widget parameters, such as alignment (Qt::AlignBottom) and others. However, I cannot import them (other PyQt5 material imports without any problem).
With this code:
from PyQt5 import Qt
progressBar = QProgressBar(splash)
progressBar.setAlignment(Qt.AlignBottom)
I get the following error:
Traceback (most recent call last):
  File "run_app.py", line 50, in <module>
    runSemApp(sys.argv)
  File "run_app.py", line 32, in runSemApp
    progressBar.setAlignment(Qt.AlignBottom)
AttributeError: 'module' object has no attribute 'AlignBottom'
And this one works:
from PyQt5.Qt import *
progressBar = QProgressBar(splash)
progressBar.setAlignment(Qt.AlignBottom)
Although I have a working solution, I would like to import only Qt.AlignBottom rather than *. Also, why doesn't Qt.AlignBottom work with from PyQt5 import Qt?
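For what it's worth, a narrower import is possible: the alignment flags are attributes of the Qt namespace class defined in PyQt5.QtCore, not of the top-level PyQt5.Qt module (which is why from PyQt5 import Qt binds a module without an AlignBottom attribute). A minimal sketch:

```python
# Sketch: import the Qt namespace *class* from QtCore directly, instead of
# the PyQt5.Qt consolidating *module* or a wildcard import.
from PyQt5.QtCore import Qt

# The flags now resolve as class attributes:
bottom = Qt.AlignBottom
```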
While learning Rust I happened to need to compare variants inside nested enums. Given the following enums, how can I compare which variant of BuffTarget was actually instantiated?
enum Attribute {
    Strength,
    Agility,
    Intellect,
}

enum Parameter {
    Health,
    Mana,
}

enum BuffTarget {
    Attribute(Attribute),
    Parameter(Parameter),
}
After searching online I found "discriminants", and in particular a comparison function like this:
fn variant_eq<T>(a: &T, b: &T) -> bool {
    std::mem::discriminant(a) == std::mem::discriminant(b)
}
Unfortunately, this function does not work in my case:
#[test]
fn variant_check_is_working() {
    let str = BuffTarget::Attribute(Attribute::Strength);
    let int = BuffTarget::Attribute(Attribute::Intellect);
    assert_eq!(variant_eq(&str, &int), false);
}
// Output:
// thread 'tests::variant_check' panicked at 'assertion failed: `(left == right)`
// left: `true`,
// right: `false`', src/lib.rs:11:9
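For context on the failing assertion above: std::mem::discriminant only distinguishes the outer variant, so two BuffTarget::Attribute values always compare equal regardless of the inner Attribute. A sketch (reusing the names from the question) that matches one level deeper before comparing discriminants:

```rust
// Sketch: discriminant() sees only the outer variant, so compare the
// inner enums after matching the outer layer first.
use std::mem::discriminant;

#[allow(dead_code)]
enum Attribute { Strength, Agility, Intellect }
#[allow(dead_code)]
enum Parameter { Health, Mana }

enum BuffTarget {
    Attribute(Attribute),
    Parameter(Parameter),
}

fn target_variant_eq(a: &BuffTarget, b: &BuffTarget) -> bool {
    match (a, b) {
        (BuffTarget::Attribute(x), BuffTarget::Attribute(y)) => discriminant(x) == discriminant(y),
        (BuffTarget::Parameter(x), BuffTarget::Parameter(y)) => discriminant(x) == discriminant(y),
        _ => false, // different outer variants
    }
}

fn main() {
    let s = BuffTarget::Attribute(Attribute::Strength);
    let i = BuffTarget::Attribute(Attribute::Intellect);
    assert!(!target_variant_eq(&s, &i)); // inner variants differ
}
```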
Ideally, I would like my code to be something like this, using if let:
let …

I am creating a data migration in new_app and want to be able to roll it back.
# This is `new_app` migration
class Migration(migrations.Migration):
    dependencies = [
    ]
    operations = [
        migrations.RunPython(import_data, reverse_code=delete_data)
    ]
This migration adds some data to a model defined in another app, my_other_app. To import the model whose records I want to update or delete, I use the apps.get_model() method.
# This is `new_app` migration
def import_data(apps, schema_editor):
    model = apps.get_model('my_other_app', 'MyModel')
When I apply the migration, it works like a charm. But when I try to roll the migration back with ~> manage.py migrate new_app zero, I get an exception: LookupError: No installed app with label 'my_other_app'. The model import in the rollback code:
# This is `new_app` migration
def delete_data(apps, schema_editor):
    schema_model = apps.get_model('my_other_app', 'MyModel')
The code for the model import is the same, so why does it not work when the migration is rolled back? For now I have a workaround that uses a direct model import during the rollback; I am not sure whether it will cause trouble later.
I am having trouble figuring out how to unit-test methods of the target struct.
I have a method random_number that returns a random value based on an attribute of the struct, and another method plus_one that takes the result of the first method and processes it:
pub struct RngTest {
    pub attr: u64,
}

impl RngTest {
    pub fn random_number(&self) -> u64 {
        let random = 42; // lets pretend it is random
        return random * self.attr;
    }

    pub fn plus_one(&self) -> u64 {
        return self.random_number() + 1;
    }
}
Given the first method, what is the strategy for unit-testing the second one? I would like to mock the output of self.random_number() so that the unit test for plus_one() works with sane values. There is a good article comparing the different mocking libraries, which concludes (sadly) that none of them really stands out from the rest.

The only thing I learned from reading those libraries' descriptions is that the only way to mock the methods is to move them into a trait. I did not find any example in those libraries (I looked at 4 or 5 of them) testing a case similar to this one.

After moving these methods into a trait, how can I mock the output of random_number to unit-test RngTest::plus_one?
pub trait SomeRng {
    fn random_number(&self) -> u64 {
        let random …

I have a SQLite database with a table containing dates. I want to select records that fall within a specific range, but I cannot write the correct query.
# This query returns nothing
rows = model.select().where(
    (model.date.between(start_date, end_date)) &
    (model.name == point_name)
).tuples()

# This query returns nothing too
rows = model.select().where(
    (model.date > start_date) &
    (model.date < end_date) &
    (model.name == point_name)
).tuples()

# However this one works:
rows = model.select().where(
    (model.date > start_date) &
    (model.name == point_name)
).tuples()
Why does my code work when querying for dates later than a given one, but not when querying for a date range?
My Rocket application has the following working database connection setup:
main.rs:
#[database("my_db")]
pub struct DbConn(diesel::PgConnection);
Rocket.toml:
[global.databases]
my_db = { url = "postgres://user:pass@localhost/my_db" }
I would like to set the user name, password, and database name from the environment. I expected something like ROCKET_MY_DB=postgres://user:pass@localhost/my_db to work, but had no success. I could not find a relevant database example for Rocket.
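For reference, Rocket (0.4) also reads its configuration from the environment, and the whole databases table can be supplied as inline TOML through the ROCKET_DATABASES variable. A sketch with placeholder credentials assembled from separate variables:

```shell
# Sketch: ROCKET_DATABASES carries the databases table as inline TOML,
# overriding the [global.databases] section of Rocket.toml.
export DB_USER=user
export DB_PASS=pass
export DB_NAME=my_db
export ROCKET_DATABASES="{my_db={url=\"postgres://${DB_USER}:${DB_PASS}@localhost/${DB_NAME}\"}}"
echo "$ROCKET_DATABASES"
```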
I need to run Diesel database migrations in production for a Rocket-based application. In general there are several ways to perform migrations for a database:
I would prefer the second option, invoked with a --migrate flag on the application binary, but since the target application is fairly simple, the first approach would do as well.
There is a thread in the Diesel issue tracker about running migrations in production, with advice on how to do it:
- Add diesel_migrations to your dependencies
- Include extern crate diesel_migrations in your crate, and make sure to decorate it with #[macro_use]
- At the beginning of your code, add embed_migrations!()
- To run the migrations, use embedded_migrations::run(&db_conn)
In main.rs I did:
#![feature(proc_macro_hygiene, decl_macro)]

#[macro_use]
extern crate diesel;
#[macro_use]
extern crate diesel_migrations;
#[macro_use]
extern crate rocket;
#[macro_use]
extern crate rocket_contrib;

#[database("my_db_name")]
pub struct DbConn(diesel::PgConnection);

fn main() {
    // Update database
    embed_migrations!();
    embedded_migrations::run(&DbConn);

    // Launch the app
    ...
}
This results in an error:
#![feature(proc_macro_hygiene, decl_macro)]

#[macro_use]
extern crate diesel;
#[macro_use]
extern crate diesel_migrations;
#[macro_use]
extern …
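For comparison, a sketch of how the issue-tracker advice is usually wired up (assumptions: a plain PgConnection established from a hard-coded URL, which is hypothetical here; in the real app the URL would come from Rocket's config). Two things differ from the main.rs above: embed_migrations!() sits at module scope so the generated embedded_migrations module exists, and run() receives an established connection rather than the DbConn request-guard type:

```rust
#[macro_use]
extern crate diesel_migrations;

use diesel::prelude::*;
use diesel::PgConnection;

// Generates the `embedded_migrations` module from the migrations/ directory
// at compile time; must be at module scope, not inside fn main.
embed_migrations!();

fn main() {
    // hypothetical URL; the real app would read it from Rocket's configuration
    let database_url = "postgres://user:pass@localhost/my_db";
    let conn = PgConnection::establish(database_url)
        .expect("failed to connect to the database");
    embedded_migrations::run(&conn).expect("failed to run migrations");
}
```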