Posts by edu*_*ant

Chrome - org.openqa.selenium.WebDriverException: unknown error: cannot get automation extension at driver.manage().window().maximize();

I have run into a very unusual error thrown by the Chrome browser when I try to maximize the window with the following line of code:

driver.manage().window().maximize();

I got the following error:

org.openqa.selenium.WebDriverException: unknown error: cannot get automation extension
from unknown error: page could not be found: chrome-extension://aapnijgdinlhnhlmodcfapnahmbfebeb/_generated_background_page.html
(Session info: chrome=57.0.2987.110)
(Driver info: chromedriver=2.27.440174 (e97a722caafc2d3a8b807ee115bfb307f7d2cfd9),platform=Windows NT 6.3.9600 x86_64) (WARNING: The server did not provide any stacktrace information)
Command duration or timeout: 10.05 seconds

Going through similar reports of this error, I have done the following:

1. Updated the Chrome driver to the latest version, i.e. 2.28, for my Chrome
   version 57.0.2987.110 (64-bit)
2. Uninstalled and re-installed Chrome
3. Rebuilt the project in Eclipse and even created a new workspace …

selenium google-chrome webdriver selenium-chromedriver selenium-webdriver

6 votes · 1 answer · 20k views

Wait until text is not present in a textbox in Selenium

Below is the scenario I am trying to automate:

1) Some text is already present in the textbox.
2) Click on a radio button.
3) A processing popup is displayed for a few seconds; after the popup
   disappears, the textbox becomes blank.
4) Once the textbox is blank, I have to enter a different value into it.

Please help me: how do I wait until the textbox value is empty?

I am using the IE driver for the automation.

Thanks in advance.
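One way to do this is a custom predicate for Selenium's standard explicit wait, `WebDriverWait.until()`, which keeps polling until the textbox's `value` attribute is empty. A minimal sketch (locator and element names are illustrative; the same predicate works with the IE driver):

```python
def textbox_is_empty(locator):
    """Predicate for WebDriverWait.until(): True once the textbox's
    'value' attribute is an empty string."""
    def _predicate(driver):
        element = driver.find_element(*locator)
        return element.get_attribute("value") == ""
    return _predicate

# With a live driver (IE included) it would plug in as:
# WebDriverWait(driver, 10).until(textbox_is_empty((By.ID, "myTextbox")))
# driver.find_element(By.ID, "myTextbox").send_keys("new value")
```

Once `until()` returns, the popup has disappeared and the field is blank, so it is safe to type the new value.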

java selenium selenium-webdriver

5 votes · 1 answer · 5,579 views

Understanding Spark DAG execution

I want to understand the Spark DAG model from the official Spark documentation: all transformations in Spark are lazy, and by default each transformed RDD may be recomputed every time an action runs on it. So I wrote a small program as follows:

scala> val lines = sc.textFile("C:\\Spark\\README.md")
lines: org.apache.spark.rdd.RDD[String] = C:\Spark\README.md MapPartitionsRDD[1] at textFile at <console>:24

scala> val breakLInes = lines.flatMap(line=>line.split(" "))
breakLInes: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[2] at flatMap at <console>:26

scala> val createTuple = breakLInes.map(line=>(line,1))
createTuple: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[3] at map at <console>:28

scala> val wordCount = createTuple.reduceByKey
reduceByKey   reduceByKeyLocally

scala> val wordCount = createTuple.reduceByKey(_+_)
wordCount: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at reduceByKey at <console>:30

scala> wordCount.first
res0: (String, Int) = (package,1)

Now, moving on to the Spark UI below, which shows my first action's …
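What the transformation chain computes can be sketched in plain Python (an eager analogy for intuition only; Spark itself defers all of this until the `first` action triggers a job):

```python
# Eager re-implementation of the RDD pipeline above, for intuition only.
lines = ["# Apache Spark", "Spark is a fast engine"]   # stand-in for textFile

words = [w for line in lines for w in line.split(" ")]  # flatMap
pairs = [(w, 1) for w in words]                         # map

counts = {}                                             # reduceByKey(_+_)
for word, n in pairs:
    counts[word] = counts.get(word, 0) + n
```

In Spark, none of the three transformations runs when it is declared; the DAG is only materialized when an action such as `first` or `collect` forces a job, and that job is what appears in the Spark UI.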

scala bigdata apache-spark

5 votes · 0 answers · 279 views

ImportError: No module named 'onnx_backend'?

I installed ONNX from https://github.com/onnx/onnx and am now trying to run some of the models from https://github.com/onnx/models#face_detection. The problem is with the imports:

import numpy as np
import onnx

That works, but when I try to import

import onnx_backend as backend

it gives me the following error:

Python 3.5.2 (default, Nov 23 2017, 16:37:01) 
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import onnx_backend as backend
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named 'onnx_backend'

Otherwise I can load the models without any error; how do I fix the import error?
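One likely cause: `onnx_backend` is not a module that the `onnx` package itself ships; a backend comes from a separate package (for example, Caffe2 exposes `caffe2.python.onnx.backend`, and onnx-tf exposes `onnx_tf.backend`). A quick way to check which module names are actually importable without triggering an `ImportError`:

```python
import importlib.util

# find_spec returns None when a module name cannot be resolved,
# instead of raising ImportError the way a plain import would.
for name in ["onnx_backend", "json"]:
    spec = importlib.util.find_spec(name)
    print(name, "available" if spec is not None else "missing")
```

If the backend package you want reports "missing", install it and import its documented module path rather than `onnx_backend`.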

computer-vision python-3.x onnx

3 votes · 1 answer · 4,672 views

Pandas AttributeError: 'DataFrame' object has no attribute 'Datetime'

I am using the Holt-Winters method, following the guide here. My data format is:

 Year       Rate
0  2013  34.700000
1  2013  34.666667
2  2013  34.600000
3  2014  35.300000
4  2014  34.180000

Below is my code:

import pandas as pd 

#Importing data

df = pd.read_csv('/home/rajnish.kumar/eclipse-workspace/ShivShakti/Result/weeklyDatarateyearonly/part-00000-971f46d7-a97d-4a7e-be41-dc840c2d0618-c000.csv')

df.Timestamp = pd.to_datetime(df.Datetime,format='%Y') 

But I am getting this error:

AttributeError: 'DataFrame' object has no attribute 'Datetime'
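The frame only has `Year` and `Rate` columns, so `df.Datetime` fails; also, assigning with `df.Timestamp = …` sets an attribute on the object rather than creating a new column, so bracket assignment is safer. A self-contained sketch of the fix (using an inline CSV in place of the original file path):

```python
import pandas as pd
from io import StringIO

# Inline stand-in for the original CSV file
csv = StringIO("Year,Rate\n2013,34.700000\n2013,34.666667\n2014,35.300000\n")
df = pd.read_csv(csv)

# Parse the existing 'Year' column (there is no 'Datetime' column),
# and assign with brackets so a real column is created.
df["Timestamp"] = pd.to_datetime(df["Year"].astype(str), format="%Y")
```

After this, `df["Timestamp"]` holds proper datetimes (January 1 of each year) that the Holt-Winters code can index on.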

python dataframe pandas

1 vote · 1 answer · 20k views