I am trying to update GitLab CE from 10.3.2 to the latest version (currently 11.4). It gives me this error:
[...]
gitlab preinstall: It seems you are upgrading from 10.x version series
gitlab preinstall: to 11.x series. It is recommended to upgrade
gitlab preinstall: to the last minor version in a major version series first before
gitlab preinstall: jumping to the next major version.
gitlab preinstall: Please follow the upgrade documentation at https://docs.gitlab.com/ee/policy/maintenance.html#upgrade-recommendations
gitlab preinstall: and upgrade to 10.8 first.
dpkg: error processing archive /var/cache/apt/archives/gitlab-ce_11.2.3-ce.0_amd64.deb (--unpack):
subprocess new pre-installation script returned error exit status 1
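What worked for me, sketched here under the assumption that the package comes from the official GitLab apt repository (the exact 10.8.x version string below is illustrative; list the real ones first), is to step through the last 10.x minor release before jumping to 11.x, as the preinstall check asks:

```shell
# List the gitlab-ce versions the repository offers.
apt-cache madison gitlab-ce

# Step to the last 10.x minor release first.
# The version string is an assumption -- pick the newest 10.8.x shown above.
sudo apt-get install gitlab-ce=10.8.7-ce.0

# Then upgrade to the target 11.x release.
sudo apt-get install gitlab-ce
```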
Errors were encountered while …

In LUCENE-5472, Lucene was changed to throw an error instead of just logging a message when a term is too long. This error indicates that Solr does not accept tokens larger than 32766 bytes:
Caused by: java.lang.IllegalArgumentException: Document contains at least one immense term in field="text" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[10, 10, 70, 111, 117, 110, 100, 32, 116, 104, 105, 115, 32, 111, 110, 32, 116, 104, 101, 32, 119, 101, 98, 32, 104, 111, 112, 101, 32, 116]...', original message: bytes can be …

I have a system that indexes the Twitter stream into Elasticsearch. It has been running for a few weeks.
Recently an error has started appearing: Limit of total fields [1000] in index [dev_tweets] has been exceeded.
I was wondering if anyone has run into the same problem?
Also, if I run this curl:
$ curl -s -XGET http://localhost:9200/dev_tweets/_mapping?pretty | grep type | wc -l
890
it should give me, more or less, the number of fields in the mapping. There are many fields, but not more than 1000.
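If the extra fields are legitimate, the limit can be raised per index. A sketch, assuming the index name and host from the question (2000 is an arbitrary example value):

```shell
# Raise the total-fields limit for the dev_tweets index.
curl -XPUT 'http://localhost:9200/dev_tweets/_settings' \
  -H 'Content-Type: application/json' \
  -d '{ "index.mapping.total_fields.limit": 2000 }'
```

Note that the limit counts every mapped field, including object properties and multi-fields, and object fields do not always carry an explicit "type" line in the mapping output, so `grep type | wc -l` is only a rough estimate of the real count.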
Why, if I commit to, say, the develop branch, does it not show up in my contributions?
I expected that once I merged that branch into master, everything committed to develop or any other branch would become visible. But that does not seem to be the case.
Is there any way to do this? Or do I have to commit directly to master?
With the library
import logging
When I use the method .error(text) or .warning(text), the logger writes the log level as INFO, WARNING, ERROR, and so on.
I would like to know if there is a way to change the string WARNING, for example to WARN (and likewise change ERROR to ERR).
TL;DR: I want to change the text of the message's logging level ("DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL") however I like... is there a way to do this?
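The standard library supports this directly: logging.addLevelName replaces the display name associated with a numeric level. A minimal sketch:

```python
import logging

# Replace the display names for the built-in levels.
logging.addLevelName(logging.WARNING, "WARN")
logging.addLevelName(logging.ERROR, "ERR")

logging.basicConfig(format="%(levelname)s: %(message)s")
logger = logging.getLogger("demo")
logger.warning("low disk space")  # logged as "WARN: low disk space"
```

The renaming is global for the process, since the level-name table lives in the logging module itself.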
I have a JSON that can change over time, and using case classes could be inconvenient, because I would need to change them every time the structure of the JSON changes.
For example, if I have a JSON like this:
val json= """{
"accounts": [
{ "emailAccount": {
"accountName": "YMail",
"username": "USERNAME",
"password": "PASSWORD",
"url": "imap.yahoo.com",
"minutesBetweenChecks": 1,
"usersOfInterest": ["barney", "betty", "wilma"]
}},
{ "emailAccount": {
"accountName": "Gmail",
"username": "USER",
"password": "PASS",
"url": "imap.gmail.com",
"minutesBetweenChecks": 1,
"usersOfInterest": ["pebbles", "bam-bam"]
}}
]
}"""
I would like to access it in the following way:
val parsedJSON = parse(json)
parsedJSON.accounts(0).emailAccount.accountName
I want to detect the "copy to clipboard" event, i.e. when the user selects a string or a URL and then taps copy to clipboard.
Do you know how to check for this in an Android environment?
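The framework exposes a listener for exactly this: ClipboardManager.OnPrimaryClipChangedListener fires whenever the primary clip changes, i.e. right after a copy. A fragment-level sketch, assuming you have an Activity or other `context` at hand (it is not a standalone compilable unit):

```java
import android.content.ClipData;
import android.content.ClipboardManager;
import android.content.Context;

// Sketch: register a listener that fires whenever something is copied.
ClipboardManager clipboard =
        (ClipboardManager) context.getSystemService(Context.CLIPBOARD_SERVICE);
clipboard.addPrimaryClipChangedListener(() -> {
    ClipData clip = clipboard.getPrimaryClip();
    if (clip != null && clip.getItemCount() > 0) {
        CharSequence copied = clip.getItemAt(0).getText();
        // react to the copied text/URL here
    }
});
```

Note that since Android 10, apps can only read the clipboard while in the foreground (or when acting as the default input method), so a background service cannot observe copies this way.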
How can I get, from an iterator like this
val it = Iterator("one","two","three","four","five")
a Map like this
Map(four -> 4, three -> 5, two -> 3, five -> 4, one -> 3)
var m = Map[String, Int]()
while (it.hasNext) {
val cell = it.next()
m += (cell -> cell.length())
}
This is a solution using var, but I would like to use only immutable collections and val variables.
If I use a for yield statement, the returned object is an Iterator[Map], which I don't want:
val m = for(i<- it if it.hasNext) yield Map(i->i.length())
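A var-free alternative using only the standard library: map the iterator to key/value pairs and call toMap, which consumes the iterator and builds a single immutable Map in one pass:

```scala
val it = Iterator("one", "two", "three", "four", "five")
// Build the immutable Map in one pass: each word paired with its length.
val m: Map[String, Int] = it.map(word => word -> word.length).toMap
```

Since Iterator.map is lazy, nothing is traversed until toMap runs; afterwards the iterator is exhausted, but m is a plain immutable Map.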
I installed Cloudera Manager 5.13.
On the first installation and run of YARN, I get the following error:
Error starting JobHistoryServer
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://vmi150132.contaboserver.net:8020/user/history/done]
at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.tryCreatingHistoryDirs(HistoryFileManager.java:682)
at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.createHistoryDirs(HistoryFileManager.java:618)
at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:579)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.mapreduce.v2.hs.JobHistory.serviceInit(JobHistory.java:95)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:154)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.launchJobHistoryServer(JobHistoryServer.java:229)
at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.main(JobHistoryServer.java:239)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:279)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:260)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:240)
Run Code Online (Sandbox Code Playgroud)
[... and further stack trace lines]
So basically the problem is the permissions of the HDFS folder.
Something like:
sudo -u hdfs hdfs dfs -chmod -R 777 /
will fix the error.
But my question is... isn't that insecure? Why doesn't Cloudera handle this permission itself?
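It is indeed insecure: a recursive 777 on the HDFS root lets every user write anywhere. A narrower fix, sketched under the assumption that the path and user match the stack trace above (user=mapred, done dir under /user/history; the group name hadoop is also an assumption, adjust to your cluster):

```shell
# Create only the JobHistory directory and hand it to the mapred user,
# instead of opening the entire HDFS root to everyone.
sudo -u hdfs hdfs dfs -mkdir -p /user/history
sudo -u hdfs hdfs dfs -chown -R mapred:hadoop /user/history
sudo -u hdfs hdfs dfs -chmod 1777 /user/history
```

This leaves "/" owned by hdfs with its default permissions and grants write access only where the JobHistoryServer actually needs it.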