I want to log the event name and parameters of every event on my Node server. For this I used:
io.use(function(socket, next){
// how to get event name out of socket.
});
Now I'm struggling to get the event name and arguments. This seems like a common need for API developers, so I'm fairly sure there must be some way to get them; I tried reading the documentation and the source code, but I couldn't figure it out.
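`io.use` middleware runs once per connection, before any events arrive, so the event name is not available there. In recent socket.io versions, per-socket middleware registered with `socket.use` sees every incoming packet as an array of `[eventName, ...args]`. A minimal sketch (the `logEvents` helper name is mine):

```javascript
// Per-socket middleware: each incoming packet is [eventName, ...args].
function logEvents(socket) {
  socket.use((packet, next) => {
    const [eventName, ...args] = packet;
    console.log('event:', eventName, 'args:', args);
    next(); // always call next(), or the event is never delivered
  });
}

// Wire it up when a client connects:
// io.on('connection', (socket) => logEvents(socket));
```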
I'm trying out the Cassandra Node.js driver and got stuck while inserting a record; it looks like the driver cannot insert floating-point values.
Problem: when passing an int value for insertion into the DB, the API gives the following error:
Debug: hapi, internal, implementation, error
ResponseError: Expected 4 or 0 byte int (8)
at FrameReader.readError (/home/gaurav/Gaurav-Drive/code/nodejsWorkspace/cassandraTest/node_modules/cassandra-driver/lib/readers.js:291:13)
at Parser.parseError (/home/gaurav/Gaurav-Drive/code/nodejsWorkspace/cassandraTest/node_modules/cassandra-driver/lib/streams.js:185:45)
at Parser.parseBody (/home/gaurav/Gaurav-Drive/code/nodejsWorkspace/cassandraTest/node_modules/cassandra-driver/lib/streams.js:167:19)
at Parser._transform (/home/gaurav/Gaurav-Drive/code/nodejsWorkspace/cassandraTest/node_modules/cassandra-driver/lib/streams.js:101:10)
at Parser.Transform._read (_stream_transform.js:179:10)
at Parser.Transform._write (_stream_transform.js:167:12)
at doWrite (_stream_writable.js:225:10)
at writeOrBuffer (_stream_writable.js:215:5)
at Parser.Writable.write (_stream_writable.js:182:11)
at write (_stream_readable.js:601:24)
The query I'm trying to execute from my code:
INSERT INTO ragchews.user
(uid ,iid ,jid ,jpass ,rateCount ,numOfratedUser ,hndl ,interests ,locX ,locY ,city )
VALUES
('uid_1',{'iid1'},'jid_1','pass_1',25, 10, {'NEX1231'}, {'MUSIC'}, 21.321, 43.235, …
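A likely cause: with unprepared statements the driver has to guess CQL types from the JavaScript values, and a JS number does not map unambiguously onto Cassandra's `int`/`float`/`double`. Executing the insert as a prepared statement lets the driver fetch the column metadata and encode each parameter with the correct type. A sketch (the `insertUser` helper and the trimmed column list are mine; `client` is assumed to be a connected `cassandra-driver` `Client`):

```javascript
// Prepared statement: { prepare: true } makes the driver look up the
// column types, so 25 is encoded as int and 21.321 as float/double.
function insertUser(client, user) {
  const query =
    'INSERT INTO ragchews.user (uid, rateCount, locX, locY) VALUES (?, ?, ?, ?)';
  return client.execute(
    query,
    [user.uid, user.rateCount, user.locX, user.locY],
    { prepare: true }
  );
}
```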
I need to create a drop-down list (data validation) on specific cells in an Excel sheet, and read it back.
With the help of the Apache POI tutorials, I was able to create a drop-down list in an Excel sheet, but I also need to read the drop-down contents back when reading the file again, so that I can render a similar drop-down list in the UI. Any suggestions?
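For the read-back part: Apache POI exposes a sheet's validations via `getDataValidations()`, and for an explicit-list constraint the items come from `getExplicitListValues()`. A sketch, assuming the `.xlsx`/XSSF API (the class and method names `DropdownReader`/`printDropdowns` are mine):

```java
import java.util.List;
import org.apache.poi.ss.usermodel.DataValidationConstraint;
import org.apache.poi.xssf.usermodel.XSSFDataValidation;
import org.apache.poi.xssf.usermodel.XSSFSheet;

public class DropdownReader {
    /** Prints every drop-down (list-type validation) defined on the sheet. */
    static void printDropdowns(XSSFSheet sheet) {
        List<XSSFDataValidation> validations = sheet.getDataValidations();
        for (XSSFDataValidation dv : validations) {
            DataValidationConstraint constraint = dv.getValidationConstraint();
            // Explicit list values are non-null only for inline lists;
            // a formula-backed list exposes its range via getFormula1().
            String[] items = constraint.getExplicitListValues();
            if (items != null) {
                System.out.println(String.join(", ", items));
            } else {
                System.out.println("formula: " + constraint.getFormula1());
            }
        }
    }
}
```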
I'm using Spark 2.3.0.
For an Apache Spark project I'm processing this dataset. When I try to read the CSV with Spark, the rows in the Spark DataFrame don't correspond to the correct rows of the CSV file (see the sample csv here). The code is as follows:
answer_df = sparkSession.read.csv('./stacksample/Answers_sample.csv', header=True, inferSchema=True, multiLine=True);
answer_df.show(2)
Output:
+--------------------+-------------+--------------------+--------+-----+--------------------+
| Id| OwnerUserId| CreationDate|ParentId|Score| Body|
+--------------------+-------------+--------------------+--------+-----+--------------------+
| 92| 61|2008-08-01T14:45:37Z| 90| 13|"<p><a href=""htt...|
|<p>A very good re...| though.</p>"| null| null| null| null|
+--------------------+-------------+--------------------+--------+-----+--------------------+
only showing top 2 rows
However, when I use pandas, it works like a charm.
df = pd.read_csv('./stacksample/Answers_sample.csv')
df.head(3)
Output:
Index Id OwnerUserId CreationDate ParentId Score Body
0 92 61 2008-08-01T14:45:37Z 90 13 <p><a href="http://svnbook.red-bean.com/">Vers...
1 124 26 2008-08-01T16:09:47Z 80 12 <p>I wound up using this. It is a kind …
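A likely cause is quote escaping: the Stack Overflow dump escapes quotes inside quoted fields by doubling them (`""`), which pandas and Python's `csv` module understand by default, while Spark's CSV reader defaults to backslash as the escape character and therefore splits the multi-line `Body` field mid-record. The stdlib sketch below demonstrates the doubled-quote convention; under that assumption, the Spark-side fix is adding `escape='"'`:

```python
import csv
import io

# A quoted, multi-line Body field with doubled quotes, as in the dump:
sample = 'Id,Body\n92,"<p><a href=""http://x"">link</a>\nsecond line</p>"\n'
rows = list(csv.reader(io.StringIO(sample)))
# csv (like pandas) reads this as one header row plus ONE record,
# with the embedded newline kept inside the Body field.

# The equivalent Spark option (sketch, assuming sparkSession as above):
# sparkSession.read.csv('./stacksample/Answers_sample.csv', header=True,
#                       inferSchema=True, multiLine=True, escape='"')
```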
The Google Cloud SDK installation process fails on my machine (a Mac) and leaves me with the stack trace below.
Traceback (most recent call last):
File "/Users/ttn/Desktop/google-cloud-sdk/bin/bootstrapping/install.py", line 218, in <module>
main()
File "/Users/ttn/Desktop/google-cloud-sdk/bin/bootstrapping/install.py", line 203, in main
sdk_root=bootstrapping.SDK_ROOT,
File "/Users/ttn/Desktop/google-cloud-sdk/lib/googlecloudsdk/core/platforms_install.py", line 452, in UpdateRC
completion_update, path_update, rc_path, sdk_root, host_os).Update()
File "/Users/ttn/Desktop/google-cloud-sdk/lib/googlecloudsdk/core/platforms_install.py", line 214, in Update
self.path, rc_contents, source_line=self._GetSourceLine())
File "/Users/ttn/Desktop/google-cloud-sdk/lib/googlecloudsdk/core/platforms_install.py", line 167, in _GetRcContents
filtered_contents=filtered_contents, line=line)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 197: ordinal not in range(128)
Some more details:
The system's default Python version:
python -V
Python 3.6.1 :: Anaconda custom (x86_64)
The Python version used for the Cloud SDK:
echo $CLOUDSDK_PYTHON
/usr/bin/python2.7
Checking the gcloud command …
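One common cause: `install.py` runs under Python 2 and reads your shell rc file as ASCII, and byte `0xe2` is the first byte of UTF-8 characters such as curly quotes and em-dashes. If the rc file the installer tries to update contains such a character, the install fails with exactly this `UnicodeDecodeError`. A sketch for locating the offending bytes (the exact rc file path is an assumption; check whichever file your shell uses):

```shell
# List lines containing bytes outside printable ASCII
# (note: this also flags literal tab characters).
RC_FILE="$HOME/.bash_profile"
LC_ALL=C grep -n '[^ -~]' "$RC_FILE" || true
```

Rewriting those characters in plain ASCII should let the installer proceed.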
I'm trying to parse a JSON string into a Java object using the Gson library, but I'm running into a StackOverflowError.
java.lang.StackOverflowError
com.google.gson.internal.$Gson$Types.checkNotPrimitive($Gson$Types.java:431)
com.google.gson.internal.$Gson$Types.access$000($Gson$Types.java:42)
com.google.gson.internal.$Gson$Types$WildcardTypeImpl.($Gson$Types.java:540)
com.google.gson.internal.$Gson$Types.canonicalize($Gson$Types.java:108)
com.google.gson.internal.$Gson$Types$WildcardTypeImpl.($Gson$Types.java:549)
com.google.gson.internal.$Gson$Types.canonicalize($Gson$Types.java:108)
com.google.gson.internal.$Gson$Types$WildcardTypeImpl.($Gson$Types.java:542)
com.google.gson.internal.$Gson$Types.canonicalize($Gson$Types.java:108)
com.google.gson.internal.$Gson$Types$WildcardTypeImpl.($Gson$Types.java:549)
com.google.gson.internal.$Gson$Types.canonicalize($Gson$Types.java:108)
The JSON string:
{"password":"ac@123","role":"normaluser","name":"Archana Chatterjee","username":"a.chatterjee","designation":"Teacher","id":"T_02","age":42}
The parsing code:
Entity entity = null;
entity = gson.fromJson(json, Staff.class);
The Java classes:
public class Staff extends LoginEntity {
Logger logger = Logger.getRootLogger();
@SerializedName("name")
String name;
@SerializedName("designation")
String designation;
@SerializedName("role")
String role;
@SerializedName("age")
int age;
}
public abstract class LoginEntity extends Entity {
private static final Logger logger = Logger.getRootLogger();
@SerializedName("username")
String mailid;
@SerializedName("password")
String password;
}
And the root class for all of them:
public abstract class Entity {
Logger logger …
Given that interfaces can now provide implementations for the methods they declare, I can't quite understand the difference between an interface and an abstract class. Does anyone know how to explain the difference properly?
I've also been told that interfaces are slightly more lightweight, performance-wise, than abstract classes. Can anyone confirm that?
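Even with Java 8 default methods, the structural differences remain: an interface cannot hold per-instance state or define constructors, its fields are implicitly `public static final`, and a class may implement many interfaces but extend only one abstract class. A small sketch contrasting the two (all names are mine):

```java
interface Greeter {
    String name();
    // A default method supplies behavior, but it cannot touch instance state.
    default String greet() { return "Hello, " + name(); }
}

abstract class Counter {
    private int count; // per-instance state: only a class can hold this
    int increment() { return ++count; }
}

// One superclass, any number of interfaces:
class FriendlyCounter extends Counter implements Greeter {
    public String name() { return "counter"; }
}
```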
While trying to load a linear regression model, I ran into the error below:
Traceback (most recent call last):
File "server.py", line 5, in <module>
linReg = Model()
File "/home/pyspark/Desktop/building_py_rec/lin_reg/ml_algo/model.py", line 23, in __init__
self.model = LinearRegression.load('model_lin_reg')
File "/home/pyspark/spark-2.1.0-bin-hadoop2.7/python/pyspark/ml/util.py", line 252, in load
return cls.read().load(path)
File "/home/pyspark/spark-2.1.0-bin-hadoop2.7/python/pyspark/ml/util.py", line 193, in load
java_obj = self._jread.load(path)
File "/home/pyspark/spark-2.1.0-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
File "/home/pyspark/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/home/pyspark/spark-2.1.0-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o24.load.
: java.lang.NoSuchMethodException: org.apache.spark.ml.regression.LinearRegressionModel.<init>(java.lang.String)
at java.lang.Class.getConstructor0(Class.java:3082)
at java.lang.Class.getConstructor(Class.java:1825)
at org.apache.spark.ml.util.DefaultParamsReader.load(ReadWrite.scala:325)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native …
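The `NoSuchMethodException` on `LinearRegressionModel.<init>(java.lang.String)` usually means a fitted model is being loaded through the estimator class: `LinearRegression.fit()` returns (and `save()` persists) a `LinearRegressionModel`, so the saved directory should be loaded with the model class instead. A sketch (requires a running Spark session; the path is taken from the traceback):

```python
from pyspark.ml.regression import LinearRegressionModel

# Load with the *fitted model* class, not the LinearRegression estimator.
model = LinearRegressionModel.load('model_lin_reg')
```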
I've just started with Keras and am doing some image preprocessing. I observed that the generator returned by ImageDataGenerator keeps yielding items even after all of them have been consumed in a for-loop.
image_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1/255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(train_dir,
shuffle=True,
target_size=(IMG_SHAPE, IMG_SHAPE),
batch_size=batch_size
)
print('Total number of batches - {}'.format(len(train_data_gen)))
for n, i in enumerate(train_data_gen):
if n >= 30:
# I have to add explicit break statement to get out of loop when done with iterating over all the items present in generator.
break
batch_data = i[0]
print(n, batch_data[0].shape)
# TRY to access element out of bound to see if there really exists more than …
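This is by design: Keras data iterators loop over the dataset indefinitely so that `fit()` can draw as many epochs as it needs; `len(train_data_gen)` tells you how many batches make up one pass, so the loop has to be bounded explicitly. A stdlib sketch mimicking the behaviour and the fix (the `infinite_batches` helper is mine, standing in for the Keras iterator):

```python
import itertools

def infinite_batches(data, batch_size):
    """Mimics a Keras ImageDataGenerator iterator: cycles over the data forever."""
    while True:
        for i in range(0, len(data), batch_size):
            yield data[i:i + batch_size]

data = list(range(10))
gen = infinite_batches(data, batch_size=4)

# One epoch is ceil(10 / 4) == 3 batches; take exactly that many.
num_batches = -(-len(data) // 4)
one_epoch = list(itertools.islice(gen, num_batches))

# With the real generator, the equivalent bounded loop is:
#   for i in range(len(train_data_gen)):
#       batch = train_data_gen[i]
```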
I've gone through many questions posted here and consulted the POI documentation, but I can't resolve this problem.
Problem: when trying to recalculate the formula, I get an exception.
The formula:
=CONCATENATE("#DFLT=",COUNTIF(C5:C390,"=DEFAULTERS"),"; #NP=",COUNTIF(C5:C390,"=NOT PAID"),"; #PCsh=",COUNTIF(C5:C390,"=Paid Cash"),"; #PChk=",COUNTIF(C5:C390,"=Paid Cheque"),"; #PNeft=",COUNTIF(C5:C390,"=Paid Neft"))
The exception:
10-22 17:13:15.177: E/AndroidRuntime(26300): FATAL EXCEPTION: main
10-22 17:13:15.177: E/AndroidRuntime(26300): java.lang.IllegalArgumentException: Unexpected eval class (org.apache.poi.ss.formula.eval.MissingArgEval)
10-22 17:13:15.177: E/AndroidRuntime(26300): at org.apache.poi.ss.formula.eval.OperandResolver.coerceValueToString(OperandResolver.java:275)
10-22 17:13:15.177: E/AndroidRuntime(26300): at org.apache.poi.ss.formula.functions.TextFunction.evaluateStringArg(TextFunction.java:40)
10-22 17:13:15.177: E/AndroidRuntime(26300): at org.apache.poi.ss.formula.functions.TextFunction$8.evaluate(TextFunction.java:249)
10-22 17:13:15.177: E/AndroidRuntime(26300): at org.apache.poi.ss.formula.OperationEvaluatorFactory.evaluate(OperationEvaluatorFactory.java:132)
10-22 17:13:15.177: E/AndroidRuntime(26300): at org.apache.poi.ss.formula.WorkbookEvaluator.evaluateFormula(WorkbookEvaluator.java:525)
10-22 17:13:15.177: E/AndroidRuntime(26300): at org.apache.poi.ss.formula.WorkbookEvaluator.evaluateAny(WorkbookEvaluator.java:288)
10-22 17:13:15.177: E/AndroidRuntime(26300): at org.apache.poi.ss.formula.WorkbookEvaluator.evaluateReference(WorkbookEvaluator.java:702)
10-22 17:13:15.177: E/AndroidRuntime(26300): at org.apache.poi.ss.formula.SheetRefEvaluator.getEvalForCell(SheetRefEvaluator.java:51)
10-22 17:13:15.177: E/AndroidRuntime(26300): at org.apache.poi.ss.formula.LazyAreaEval.getRelativeValue(LazyAreaEval.java:51)
10-22 17:13:15.177: E/AndroidRuntime(26300): at org.apache.poi.ss.formula.eval.AreaEvalBase.getValue(AreaEvalBase.java:109)
10-22 …