I'm used to developing web applications on Django and gunicorn.
With Django, any application module in the project can get the deployment settings via django.conf.settings. Since settings.py is written in Python, arbitrary settings and any preprocessing can be defined dynamically.
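For example, any module can read a custom value straight from settings.py; the MY_API_ENDPOINT name below is just an illustrative setting, not part of the original question:

# settings.py
MY_API_ENDPOINT = 'https://api.example.com/'  # illustrative custom setting

# anywhere else in the project
from django.conf import settings

endpoint = settings.MY_API_ENDPOINT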
In gunicorn's case, there are three configuration locations in order of precedence, and an instance of a settings registry class combines them (but those settings are usually only for gunicorn itself, not for the application).
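As a rough sketch, one of those locations is the file-based config, which is itself an ordinary Python module; command-line options then override whatever the file sets. The values below are examples only:

# gunicorn.conf.py -- example values only
bind = "127.0.0.1:8000"   # address and port to listen on
workers = 4               # number of worker processes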
With Pyramid, according to the Pyramid documentation, deployment settings are usually put into pyramid.registry.Registry().settings. But they only seem to be accessible while a pyramid.router.Router() instance exists; that is, pyramid.threadlocal.get_current_registry().settings returns None during the startup process in the application's main.py.
For example, I usually define some business logic in my SQLAlchemy model modules, and that logic needs the deployment settings, like this:
myapp/models.py
from sqlalchemy import Table, Column, types as Types  # 'types' provides INTEGER etc.
from sqlalchemy.orm import mapper
from pyramid.threadlocal import get_current_registry

from myapp.db import session, metadata

settings = get_current_registry().settings  # returns None at import time

mytable = Table('mytable', metadata,
    Column('id', Types.INTEGER, primary_key=True),
    # (other columns)...
)

class MyModel(object):
    query = session.query_property()
    external_api_endpoint = settings['external_api_uri']
    timezone = settings['timezone']

    def get_api_result(self):
        # (interact with external api ...)
        pass

mapper(MyModel, mytable)
但是,"settings ['external_api_endpoint']"会引发TypeError异常,因为"settings"为None.
I came up with two solutions.
Define a callable in models.py that accepts a config argument, and have main.py call it with the Configurator() instance.
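A minimal sketch of that approach, assuming an includeme-style callable; the includeme name and the module-level _settings dict are illustrative, not the original code:

# myapp/models.py
_settings = {}

def includeme(config):
    # called via config.include('myapp.models'); copies the deployment
    # settings so module-level code can read them after configuration
    _settings.update(config.registry.settings)

# main.py
from pyramid.config import Configurator

def main(global_config, **settings):
    config = Configurator(settings=settings)
    config.include('myapp.models')  # hands the Configurator to models.py
    return config.make_wsgi_app()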
myapp/models.py
from sqlalchemy import Table, …

I want to convert an ed25519 private key (generated with the ssh-keygen command) into a .ppk file, but I get the following error:
Couldn't load private key (unrecognised cipher name)
Can anyone help me?
Tested OpenSSH versions: OpenSSH_7.6p1, OpenSSL 1.1.0g 2 Nov 2017 and OpenSSH_7.6p1, OpenSSL 1.0.2n 7 Dec 2017 (on CoreOS and in an Arch Linux docker container)
Tested PuTTY versions: 0.70 64-bit, 0.70 32-bit, and a development snapshot (on Windows 10)
My steps were as follows.
# ssh-keygen -t ed25519 -a 100
Generating public/private ed25519 key pair.
Enter file in which to save the key (/root/.ssh/id_ed25519):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_ed25519.
Your public key has been saved in /root/.ssh/id_ed25519.pub.
The key …

I'm trying to tune ALS through TrainValidationSplit.
It works well, but I'd like to know which combination of hyperparameters is the best. How can I get the best parameters after the evaluation? (One possibility is sketched after the snippet below.)
from pyspark.ml.recommendation import ALS
from pyspark.ml.tuning import TrainValidationSplit, ParamGridBuilder
from pyspark.ml.evaluation import RegressionEvaluator
df = sqlCtx.createDataFrame(
    [(0, 0, 4.0), (0, 1, 2.0), (1, 1, 3.0), (1, 2, 4.0), (2, 1, 1.0), (2, 2, 5.0)],
    ["user", "item", "rating"],
)
df_test = sqlCtx.createDataFrame(
    [(0, 0), (0, 1), (1, 1), (1, 2), (2, 1), (2, 2)],
    ["user", "item"],
)

als = ALS()

param_grid = (
    ParamGridBuilder()
    .addGrid(als.rank, [10, 15])
    .addGrid(als.maxIter, [10, 15])
    .build()
)
evaluator = RegressionEvaluator(
    metricName="rmse", …
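A sketch of one way to pull out the winning combination after fitting, reusing the als, param_grid, and evaluator defined above; the TrainValidationSplit construction and the tvs/model names are assumptions, and validationMetrics/bestModel are the TrainValidationSplitModel attributes as of Spark 2.x:

tvs = TrainValidationSplit(
    estimator=als,
    estimatorParamMaps=param_grid,
    evaluator=evaluator,
)
model = tvs.fit(df)

# validationMetrics lines up with the param maps that were passed in, so
# zipping them identifies the best-scoring combination (RMSE: lower is better)
best_params, best_rmse = min(
    zip(tvs.getEstimatorParamMaps(), model.validationMetrics),
    key=lambda pair: pair[1],
)
print({param.name: value for param, value in best_params.items()})
print(best_rmse)
print(model.bestModel.rank)  # the fitted best model is also exposed directly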
In JanusGraph, I want to get the min() of some Date properties. Since min() and max() only support Number types, I used map{it.get().getTime()}, but I get a strange result.
How can this be done?
gremlin> mgmt = graph.openManagement()
==>org.janusgraph.graphdb.database.management.ManagementSystem@496a8a94
gremlin> person = mgmt.makeVertexLabel('Person').make()
==>Person
gremlin> t_created = mgmt.makePropertyKey('t_created').dataType(Date.class).cardinality(SINGLE).make()
==>t_created
gremlin> t_modified = mgmt.makePropertyKey('t_modified').dataType(Date.class).cardinality(SINGLE).make()
==>t_modified
gremlin> mgmt.buildIndex('i_t_created', Vertex.class).addKey(t_created).buildMixedIndex('search')
==>i_t_created
gremlin> mgmt.buildIndex('i_t_modified', Vertex.class).addKey(t_modified).buildMixedIndex('search')
==>i_t_modified
gremlin> mgmt.commit()
==>null
gremlin> person = g.addV('Person').property('t_created', new Date()).property('t_modified', new Date()).next()
==>v[16488]
gremlin> g.tx().commit()
==>null
gremlin> g.V(person).properties()
==>vp[t_created->Tue Sep 05 07:40:16 ]
==>vp[t_modified->Tue Sep 05 07:40:16 ]
gremlin> g.V(person).values('t_created', 't_modified')
==>Tue Sep 05 07:40:16 UTC 2017
==>Tue …
ed25519 ×1
gremlin ×1
janusgraph ×1
openssh ×1
paste ×1
putty ×1
pyramid ×1
pyspark ×1
python ×1
tinkerpop ×1
tinkerpop3 ×1