I cannot run sudo apt-get update on my Ubuntu EC2 instance. When I run:
sudo apt-get update
I get the following error:
Err:1 http://eu-central-1.ec2.archive.ubuntu.com/ubuntu bionic InRelease
Could not connect to eu-central-1.ec2.archive.ubuntu.com:80 (52.59.228.109), connection timed out
Could not connect to eu-central-1.ec2.archive.ubuntu.com:80 (52.59.244.233), connection timed out
Could not connect to eu-central-1.ec2.archive.ubuntu.com:80 (18.196.1.133), connection timed out
Could not connect to eu-central-1.ec2.archive.ubuntu.com:80 (35.158.129.174), connection timed out
Could not connect to eu-central-1.ec2.archive.ubuntu.com:80 (35.159.12.228), connection timed out
Could not connect to eu-central-1.ec2.archive.ubuntu.com:80 (52.59.220.169), connection timed out
Err:2 http://eu-central-1.ec2.archive.ubuntu.com/ubuntu bionic-updates InRelease
Unable to connect to eu-central-1.ec2.archive.ubuntu.com:http:
Err:3 http://eu-central-1.ec2.archive.ubuntu.com/ubuntu …
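Since every mirror IP times out on port 80, a first step is to separate instance networking from apt itself. Below is a minimal connectivity sketch of my own (not from the question) that tests raw outbound TCP to the mirror host; if this also times out, the security group, network ACL, or route table is blocking outbound HTTP rather than anything apt-specific.

# Minimal outbound-connectivity check (assumption: run on the instance itself).
import socket

host = "eu-central-1.ec2.archive.ubuntu.com"
try:
    # Attempt a plain TCP connection to port 80 with a short timeout.
    with socket.create_connection((host, 80), timeout=5):
        print("outbound port 80 is reachable")
except OSError as exc:
    print(f"cannot reach {host}:80 -> {exc}")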
I am trying to write the logs of a Lambda function to a CloudWatch log group created by Terraform.

Here is the Lambda policy JSON:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1580216411252",
      "Action": [
        "logs:CreateLogStream",
        "logs:CreateLogDelivery",
        "logs:PutLogEvents"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
Here is the Lambda assume-role policy JSON:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
I have added this to the lambda.tf file:
resource "aws_cloudwatch_log_group" "example" {
  name = "/test/logs/${var.lambda_function_name}"
}
Even though the CloudWatch log group "/test/logs/${var.lambda_function_name}" is created by Terraform, the Lambda function's logs never get written to it.
If I change the Lambda policy JSON to …
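As a sanity check of my own (not part of the original question), the snippet below uses boto3 to create a stream in the Terraform-created group and write one test event, which tells you whether the credentials in play are allowed to write there at all; the group name is a hypothetical resolved value of var.lambda_function_name.

# Permission probe: create a stream and put one event into the group.
import time
import boto3

logs = boto3.client("logs")
group = "/test/logs/my-function"  # hypothetical resolved log group name

logs.create_log_stream(logGroupName=group, logStreamName="perm-check")
logs.put_log_events(
    logGroupName=group,
    logStreamName="perm-check",
    logEvents=[{"timestamp": int(time.time() * 1000), "message": "test"}],
)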
amazon-web-services aws-lambda terraform terraform-provider-aws
I am trying to replicate a deep learning project from https://medium.com/linagora-engineering/making-image-classification-simple-with-spark-deep-learning-f654a8b876b8. I am working on Spark version 1.6.3 and have installed keras and tensorflow, but every time I try to import from sparkdl it raises an error. I am working in PySpark. When I run this:
from sparkdl import readImages
I get this error:
File "C:\Users\HP\AppData\Local\Temp\spark-802a2258-3089-4ad7-b8cb-
6815cbbb019a\userFiles-c9514201-07fa-45f9-9fd8-
c8a3a0b4bf70\databricks_spark-deep-learning-0.1.0-spark2.1-
s_2.11.jar\sparkdl\transformers\keras_image.py", line 20, in <module>
ImportError: cannot import name 'TypeConverters'
Can someone help?
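For context, the TypeConverters name that keras_image.py fails to import lives in pyspark.ml.param and was only added in Spark 2.0. A minimal check of my own (assuming pyspark is importable in the same environment) of what the running Spark actually provides:

# Check the Spark version and whether pyspark.ml.param exposes TypeConverters.
import pyspark

print(pyspark.__version__)
try:
    from pyspark.ml.param import TypeConverters  # present in Spark >= 2.0
    print("TypeConverters is available")
except ImportError:
    print("TypeConverters is missing -- this Spark predates 2.0")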
I have downloaded a file from a URL into AWS Lambda's /tmp directory (since that is the only writable path in Lambda).
My motivation is to create an Alexa Skill that downloads a file from a URL, so I created a Lambda function.
How do I access the downloaded file from the /tmp folder in Lambda?
My code is:
#!/usr/bin/python
# -*- coding: utf-8 -*-
from __future__ import print_function
import xml.etree.ElementTree as etree
from datetime import datetime as dt
import os
import urllib
import requests
from urllib.parse import urlparse
def lambda_handler(event, context):
""" Route the incoming request based on type (LaunchRequest, IntentRequest,
etc.) The JSON body of the request is provided in the event parameter.
""" …Run Code Online (Sandbox Code Playgroud) 我正在尝试从 URL 下载文件并将该文件上传到 S3 存储桶中。我的代码如下-
I am trying to download a file from a URL and upload that file to an S3 bucket. My code is as follows:

#!/usr/bin/python
# -*- coding: utf-8 -*-
from __future__ import print_function
import xml.etree.ElementTree as etree
from datetime import datetime as dt
import os
import urllib
import requests
import boto3
from botocore.client import Config
from urllib.parse import urlparse
def lambda_handler(event, context):
""" Route the incoming request based on type (LaunchRequest, IntentRequest,
etc.) The JSON body of the request is provided in the event parameter.
"""
print('event.session.application.applicationId=' + event['session'
]['application']['applicationId'])
# if (event['session']['application']['applicationId'] !=
# "amzn1.echo-sdk-ams.app.[unique-value-here]"):
# raise …
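Because the handler above is cut off, here is a self-contained sketch of my own of the same download-then-upload flow; the URL, bucket, and key parameters are placeholders, not values from the question.

# Stream a file from a URL into /tmp, then upload it to S3.
import os
import boto3
import requests

def download_and_upload(url, bucket, key):
    local_path = os.path.join("/tmp", os.path.basename(key))
    resp = requests.get(url, stream=True, timeout=30)
    resp.raise_for_status()  # fail loudly on HTTP errors
    with open(local_path, "wb") as f:
        for chunk in resp.iter_content(chunk_size=8192):
            f.write(chunk)
    boto3.client("s3").upload_file(local_path, bucket, key)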
I have a Python dictionary that I want to save as a pickle object in AWS S3.

I am trying this:
import boto3
import pickle
#Connect to S3 default profile
s3 = boto3.client('s3')
serializedMyData = pickle.dumps(myDictionary)
s3.put_object(Bucket='mytestbucket',Key='myDictionary')
The script runs successfully and I get a file named *myDictionary* in S3, but it is not a pickle: it has 0 bytes.
Then I edited my code slightly:
import boto3
import pickle
#Connect to S3 default profile
s3 = boto3.client('s3')
serializedMyData = pickle.dumps(myDictionary)
s3.put_object(Bucket='mytestbucket',Key='myDictionary').put(Body=serializedMyData)
But then I got this error:
AttributeError: 'dict' object has no attribute 'put'
What should I do?
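For what it is worth, a boto3 client call such as put_object returns a plain dict of response metadata, which matches the AttributeError above; the pickled bytes need to go in as the Body argument of put_object itself. Below is a minimal round-trip sketch of my own, reusing the bucket and key from the question with a hypothetical stand-in dictionary.

# Write a pickled dict to S3 and read it back.
import pickle
import boto3

s3 = boto3.client("s3")
my_dictionary = {"a": 1}  # hypothetical stand-in for myDictionary

serialized = pickle.dumps(my_dictionary)
s3.put_object(Bucket="mytestbucket", Key="myDictionary", Body=serialized)

restored = pickle.loads(
    s3.get_object(Bucket="mytestbucket", Key="myDictionary")["Body"].read()
)
assert restored == my_dictionary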