My program is run like this:
exe -p param1 -i param2 -o param3
It crashed and generated a core dump file, core.pid. I want to analyze the crash through the core dump file:
gdb ./exe -p param1 -i param2 -o param3 core.pid
But gdb treats core.pid as just another argument to pass to the program instead of recognizing it as a core file.
How can I analyze the core dump file in this situation?
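When loading a core dump, gdb takes only the executable and the core file; the program's own arguments must not be repeated, since they were already in effect when the dump was written. A sketch of the usual invocation (assuming exe and core.pid are in the current directory):

```shell
# Pass only the executable and the core file -- not the program's
# original arguments (-p param1 -i param2 -o param3).
gdb ./exe core.pid

# Equivalent form using the -c flag:
gdb ./exe -c core.pid
```

Once inside gdb, `bt` prints the backtrace at the point of the crash.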

In a shell, I typed gradle cleanJar in the Impatient/part1 directory. The output is below. The error is "class file for org.apache.hadoop.mapred.JobConf not found". Why does the compilation fail?
:clean UP-TO-DATE
:compileJava
Download http://conjars.org/repo/cascading/cascading-core/2.0.1/cascading-core-2.0.1.pom
Download http://conjars.org/repo/cascading/cascading-hadoop/2.0.1/cascading-hadoop-2.0.1.pom
Download http://conjars.org/repo/riffle/riffle/0.1-dev/riffle-0.1-dev.pom
Download http://repo1.maven.org/maven2/org/slf4j/slf4j-api/1.6.1/slf4j-api-1.6.1.pom
Download http://repo1.maven.org/maven2/org/slf4j/slf4j-parent/1.6.1/slf4j-parent-1.6.1.pom
Download http://repo1.maven.org/maven2/org/slf4j/slf4j-log4j12/1.6.1/slf4j-log4j12-1.6.1.pom
Download http://conjars.org/repo/thirdparty/jgrapht-jdk1.6/0.8.1/jgrapht-jdk1.6-0.8.1.pom
Download http://repo1.maven.org/maven2/org/codehaus/janino/janino/2.5.16/janino-2.5.16.pom
Download http://conjars.org/repo/cascading/cascading-core/2.0.1/cascading-core-2.0.1.jar
Download http://conjars.org/repo/cascading/cascading-hadoop/2.0.1/cascading-hadoop-2.0.1.jar
Download http://conjars.org/repo/riffle/riffle/0.1-dev/riffle-0.1-dev.jar
Download http://repo1.maven.org/maven2/org/slf4j/slf4j-api/1.6.1/slf4j-api-1.6.1.jar
Download http://repo1.maven.org/maven2/org/slf4j/slf4j-log4j12/1.6.1/slf4j-log4j12-1.6.1.jar
Download http://conjars.org/repo/thirdparty/jgrapht-jdk1.6/0.8.1/jgrapht-jdk1.6-0.8.1.jar
Download http://repo1.maven.org/maven2/org/codehaus/janino/janino/2.5.16/janino-2.5.16.jar
/home/is_admin/lab/cascading/Impatient/part1/src/main/java/impatient/Main.java:50: error: cannot access JobConf
Tap inTap = new Hfs( new TextDelimited( true, "\t" ), inPath );
^
class file for org.apache.hadoop.mapred.JobConf not found
1 error
:compileJava FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for …
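The error means javac cannot resolve Hadoop's JobConf class: cascading-hadoop refers to it, but Hadoop itself is not on the compile classpath. A possible fix is to declare Hadoop as a dependency in build.gradle; this is a sketch, and the exact artifact and version are assumptions that should match whatever the tutorial targets:

```groovy
// Hypothetical fragment for build.gradle: put Hadoop on the compile
// classpath so javac can resolve org.apache.hadoop.mapred.JobConf.
dependencies {
    compile 'org.apache.hadoop:hadoop-core:1.0.3'
}
```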

I am using Selenium WebDriver in Python to drive Firefox automatically; the script was exported from the Selenium IDE plugin in Firefox. But when I run the script, it raises this error:
======================================================================
ERROR: test_selenium (__main__.SeleniumTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "selenium_test.py", line 8, in setUp
self.driver = webdriver.Firefox()
File "C:\Python26\lib\site-packages\selenium\webdriver\firefox\webdriver.py", line 46, in __init__
self.binary, timeout),
File "C:\Python26\lib\site-packages\selenium\webdriver\firefox\extension_connection.py", line 46,
in __init__
self.binary.launch_browser(self.profile)
File "C:\Python26\lib\site-packages\selenium\webdriver\firefox\firefox_binary.py", line 44, in launch_browser
self._wait_until_connectable()
File "C:\Python26\lib\site-packages\selenium\webdriver\firefox\firefox_binary.py", line 87, in _wait_until_connectable
raise WebDriverException("Can't load the profile. Profile Dir : %s" % self.profile.path)
WebDriverException: Can't load the profile. Profile Dir : c:\users\ataosky\appdata\local\temp\tmpwpzzrv
----------------------------------------------------------------------
Ran 1 test in 67.876s
FAILED …

I want to extract information from a video file and get a count of its I, P, and B frames. How can I do that with ffmpeg? Or should I write a program using libavformat and libavcodec instead? Thanks a lot!
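ffmpeg's companion tool ffprobe can report the picture type of every frame, which a small pipeline can then tally; a sketch, where input.mp4 is a placeholder filename:

```shell
# Print each video frame's picture type (I, P or B) as CSV,
# then count the occurrences of each type.
ffprobe -v quiet -select_streams v:0 \
        -show_entries frame=pict_type -of csv input.mp4 \
  | sort | uniq -c
```

If the counts are needed programmatically instead, the same information is available when decoding with libavcodec: each decoded AVFrame carries a pict_type field.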

I have two Mats:

A: size (1, 640)
B: size (640, 480)

I want to copy A into the first column of B, so I use A.copyTo(B.col(0)), but this fails. How can I do it?
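copyTo requires the source and destination to have identical sizes, and A (1×640, a row) does not match B.col(0) (640×1, a column), so the copy fails. In C++, copying the transpose should work: Mat(A.t()).copyTo(B.col(0)); (or view the row as a column with A.reshape(0, 640)). The same shape rule can be sketched with NumPy, which is what OpenCV's Python binding uses for Mat (the data here is made up):

```python
import numpy as np

# Stand-ins for the two Mats: A is 1x640 (row), B is 640x480.
A = np.arange(640, dtype=np.float32).reshape(1, 640)
B = np.zeros((640, 480), dtype=np.float32)

# A direct copy fails because the shapes differ (1x640 vs 640x1);
# transpose A into a 640x1 column first, then copy it in.
B[:, 0] = A.T[:, 0]

print(np.array_equal(B[:, 0], A.ravel()))  # True
```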

I am writing a search-engine-like application with web2py. Is it possible to have two submit buttons in one form, the way Google has both a "Search" and an "I'm Feeling Lucky" button? Thanks in advance.
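One common pattern is to give both submit buttons the same name and distinguish them by value, since browsers send only the name/value pair of the button that was clicked. A sketch in plain HTML (the field and button names here are hypothetical):

```html
<!-- Two submit buttons in one form; only the clicked button's
     name/value pair is included in the submitted request. -->
<form action="" method="post">
  <input type="text" name="q" />
  <input type="submit" name="action" value="Search" />
  <input type="submit" name="action" value="I'm Feeling Lucky" />
</form>
```

In the web2py controller, the clicked button's value is then available as request.vars.action, so the handler can branch on it.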

I downloaded scrapy-redis from GitHub and ran it following the instructions, but it failed with this error:
2013-01-04 17:38:50+0800 [-] ERROR: Unhandled error in Deferred:
2013-01-04 17:38:50+0800 [-] Unhandled Error
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/Scrapy-0.16.3-py2.7.egg/scrapy/cmdline.py", line 138, in _run_command
cmd.run(args, opts)
File "/usr/local/lib/python2.7/dist-packages/Scrapy-0.16.3-py2.7.egg/scrapy/commands/crawl.py", line 44, in run
self.crawler.crawl(spider)
File "/usr/local/lib/python2.7/dist-packages/Scrapy-0.16.3-py2.7.egg/scrapy/crawler.py", line 47, in crawl
return self.engine.open_spider(spider, requests)
File "/usr/local/lib/python2.7/dist-packages/Twisted-12.2.0-py2.7-linux-i686.egg/twisted/internet/defer.py", line 1187, in unwindGenerator
return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
File "/usr/local/lib/python2.7/dist-packages/Twisted-12.2.0-py2.7-linux-i686.egg/twisted/internet/defer.py", line 1045, in _inlineCallbacks
result = g.send(result)
File "/usr/local/lib/python2.7/dist-packages/Scrapy-0.16.3-py2.7.egg/scrapy/core/engine.py", line 218, in open_spider
scheduler = self.scheduler_cls.from_crawler(self.crawler)
exceptions.AttributeError: type object 'Scheduler' …