Related posts: 1) docker postgres pgadmin local connection
2) https://coderwall.com/p/qsr3yq/postgresql-with-docker-on-os-x (the "Name" field is left blank in that example)
There are two ways to do this; I use the official postgres image.

Method 1:

Run it:
sudo docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -p 5432:5432 -d postgres

Then connect with:
Name: postgres
Host: localhost
Port: 5432
User: postgres
Password: mysecretpassword
...

Method 2:

Start with:
sudo docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres

Then look up the container's IP address:

sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' some-postgres

Say the result is:
172.17.42.1

Then fill in the connection properties in the pgAdmin tab:
Name: postgres
Host: 172.17.42.1
Port: 5432
User: postgres
Password: mysecretpassword
...
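
As a quick sanity check (a hedged sketch, assuming a psql client is installed on the host), both setups can be tested from a shell before configuring pgAdmin:

psql -h localhost -p 5432 -U postgres      # Method 1: published port
psql -h 172.17.42.1 -p 5432 -U postgres    # Method 2: container IP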

I have installed Anaconda into my home directory and added it to the PATH environment variable,
and I installed IPython Notebook into Anaconda with the command
conda install ipython-notebook

which completed without problems.

After that I opened a terminal and typed
ipython notebook

and it reported:
Could not start notebook. Please install ipython-notebook

Did I do something wrong during the installation?

The output of
conda list | grep ipython

is:
ipython 2.3.1 py27_0
ipython-notebook 2.3.1 py27_0
ipython-qtconsole 2.2.0 py27_0
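
One hedged diagnostic, assuming the PATH change has not taken effect in the shell that reports the error: check which ipython is actually being resolved.

which ipython    # should point into your Anaconda installation
# If it points elsewhere, put Anaconda first on PATH for this session
# (adjust the path to your actual install location):
export PATH="$HOME/anaconda/bin:$PATH"
ipython notebook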

I want to automate the following: cd into the working directory,
cd workdir

create a new directory,
mkdir mydata

and get the absolute path of this mydata directory.
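
A minimal shell sketch of those three steps (assuming workdir sits under the current directory):

#!/usr/bin/env bash
set -e                        # abort on the first failing step
cd workdir                    # enter the working directory
mkdir -p mydata               # create the directory; -p tolerates reruns
mydata_path="$(pwd)/mydata"   # absolute path of mydata
echo "$mydata_path"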

I have read the advice
do not use swap

in the manuals of both ZooKeeper and Kafka. I know that Kafka relies on the page cache to keep parts of its sequential logs cached in memory even after they are written to disk,
but I cannot understand how swapping can hurt ZooKeeper and Kafka.
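
For reference, a hedged sketch of how that advice is usually applied on Linux: rather than disabling swap entirely, the Kafka ops documentation suggests keeping vm.swappiness very low so the kernel swaps only as a last resort:

sudo sysctl vm.swappiness=1                            # take effect immediately
echo 'vm.swappiness=1' | sudo tee -a /etc/sysctl.conf  # persist across reboots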

user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types …

I am using Lambda as the backend of an AWS API Gateway with Lambda proxy integration, and I want to add CORS headers to the response.
According to the documentation:
http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-cors.html
However, you must rely on the backend to return the Access-Control-Allow-Origin header, because the integration response is disabled for proxy integrations.
How do I do this in the Lambda function using Python?
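
A minimal sketch of what the handler can return, assuming the standard proxy-integration response shape (statusCode / headers / body); the allowed origin below is a placeholder:

import json

def lambda_handler(event, context):
    # With proxy integration, API Gateway passes this dict through to the
    # client as-is, so the CORS headers must be set here in the function.
    return {
        "statusCode": 200,
        "headers": {
            "Access-Control-Allow-Origin": "*",  # placeholder; narrow to your domain in production
            "Access-Control-Allow-Headers": "Content-Type",
            "Access-Control-Allow-Methods": "OPTIONS,GET,POST",
        },
        "body": json.dumps({"message": "ok"}),
    }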

I use Terraform to manage AWS resources.
Terraform calls an administrative IAM user that is locked down with MFA, yet terraform apply and terraform destroy succeed from my local machine without my ever entering a one-time verification code.
So, does Terraform bypass multi-factor authentication?
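
For context, a hedged note: MFA restricts API calls only when an attached IAM policy demands it via the aws:MultiFactorAuthPresent condition key; otherwise long-lived access keys keep working with no code prompt, which is presumably what Terraform is using. One way to make Terraform runs MFA-backed is to work from temporary STS credentials (the MFA device ARN and account ID below are placeholders):

aws sts get-session-token \
    --serial-number arn:aws:iam::123456789012:mfa/admin-user \
    --token-code 123456
# Export the returned AccessKeyId, SecretAccessKey, and SessionToken as
# AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN
# before running terraform apply.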

I have a DataFrame in a %python environment and am trying to use it in an %r environment.
How can I make a Spark DataFrame created under %python available under %r?
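
A hedged sketch of the usual cross-cell route: both interpreters share one SparkSession, so a temporary view registered in Python can be queried from R (the view name shared_df is arbitrary):

# %python cell
df.createOrReplaceTempView("shared_df")  # df is the existing Spark DataFrame

# %r cell, shown here as comments to keep this block in one language:
#   library(SparkR)
#   df_r <- sql("SELECT * FROM shared_df")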

I am trying to build a Kafka consumer with the following code:
import java.util.Collections;
import java.util.Properties;

import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

// set up consumer
final Properties consumerProps = new Properties();
consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, CLUSTER.bootstrapServers());
consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "consumer-tutorial");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        io.confluent.kafka.serializers.KafkaAvroSerializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        io.confluent.kafka.serializers.KafkaAvroSerializer.class);
// transactional API
consumerProps.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
// consumer --from-beginning
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "10000");
consumerProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
consumerProps.put("zookeeper.connect", CLUSTER.zookeeperConnect());
consumerProps.put("schema.registry.url", CLUSTER.schemaRegistryUrl());

final KafkaConsumer<GenericRecord, GenericRecord> consumer =
        new KafkaConsumer<GenericRecord, GenericRecord>(consumerProps);
consumer.subscribe(Collections.singletonList(inputTopic));

but it fails with the following error:
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:765)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:633)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:615)
at com.telefonica.app.test_consumer.KafkaETLConsumerTest.testRunConsumer(KafkaETLConsumerTest.java:192)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
…
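
A hedged observation on the code above: KEY/VALUE_DESERIALIZER_CLASS_CONFIG are being given KafkaAvroSerializer, but a consumer needs classes that implement Deserializer, and that mismatch is a classic trigger for "Failed to construct kafka consumer". A sketch of the usual Confluent-style configuration:

// Consumers take deserializers, not serializers:
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        io.confluent.kafka.serializers.KafkaAvroDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        io.confluent.kafka.serializers.KafkaAvroDeserializer.class);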

I have installed Go on my Mac.

go version

outputs:
go version go1.8.1 darwin/amd64

and
go env

outputs:
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/MYUSERNAME/go/"
GORACE=""
GOROOT="/usr/local/go"
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/np/ts5bwp_91ns22l9h751h2j8r0000gn/T/go-build124313959=/tmp/go-build -gno-record-gcc-switches -fno-common"
CXX="clang++"
CGO_ENABLED="1"
PKG_CONFIG="pkg-config"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"

When I run the following go get command:
go get -v github.com/miku/esbulk/cmd/esbulk

it produces no output and does not do anything at all.

In the GOPATH/pkg folder there is a darwin_amd64 folder, and inside darwin_amd64 there is
github.com/miku/esbulk.a
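
A hedged reading of this behavior: in pre-module Go (1.8), go get treats a package whose compiled archive already sits under GOPATH/pkg as up to date and exits silently. Forcing an update usually produces visible work:

go get -u -v github.com/miku/esbulk/cmd/esbulk   # -u re-fetches and rebuilds
ls "$(go env GOPATH)/bin"                        # the esbulk binary should land here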