I am trying to upload an artifact to an Artifactory repo with requests, but I get a 405 error. I have a working bash script that accomplishes this, but I really need a Python implementation.
python
import os
import hashlib
import requests
from requests.auth import HTTPBasicAuth

username = 'me'
password = 'secrets'
target_file = '/home/me/app-1.0.0-snapshot.el6.noarch.rpm'
artifactory_url = 'https://artifactory.company.com/artifactory'

def get_md5(fin):
    # Stream the file in 8 KB chunks so large artifacts are not read into memory.
    md5 = hashlib.md5()
    with open(fin, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):  # b'' sentinel: the file is opened in binary mode
            md5.update(chunk)
    return md5.hexdigest()

def get_sha1(fin):
    sha1 = hashlib.sha1()
    with open(fin, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            sha1.update(chunk)
    return sha1.hexdigest()

def upload(fin):
    base_file_name = os.path.basename(fin)
    md5hash = get_md5(fin)
    sha1hash = get_sha1(fin)
    headers = {"X-Checksum-Md5": md5hash, "X-Checksum-Sha1": sha1hash} …
I am considering migrating to Gatling 2.0.0-M3a, but I am having trouble getting a basic test working. The problem I have is mapping values into a template file in Gatling 2. The example below shows how I did this in Gatling 1.5, but I cannot figure out how to do it in 2.
LoginScenario.scala - works with Gatling 1.5
package StressTesting

import com.excilys.ebi.gatling.core.Predef._
import com.excilys.ebi.gatling.http.Predef._
import Headers._
import akka.util.duration._
import bootstrap._

object LoginScenario {
  val scn = scenario("Login")
    .feed(csv("user_credentials.csv"))
    .exec(
      http("login")
        .post("/api/login")
        .fileBody("loginTemplate",
          Map(
            "userName" -> "${userName}",
            "password" -> "${password}"
          )
        ).asJSON
        .headers(post_header)
        .check(status.is(200)))
}
LoginScenario.scala - broken - version reworked to account for the changes between Gatling 1.5 and 2
package StressTesting
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import Headers._
import scala.concurrent.duration._
import bootstrap._
import io.gatling.core.session.Expression
object LoginScenario { …
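The reworked object is cut off above. For comparison, here is a hedged sketch of how this scenario might look against the Gatling 2 milestone API, where the Map-based fileBody is gone and the ${userName}/${password} placeholders live in the template file itself, resolved from the session the feeder fills; ELFileBody and the template name login.json are assumptions to verify against 2.0.0-M3a:

package StressTesting

import io.gatling.core.Predef._
import io.gatling.http.Predef._

object LoginScenario {
  // Sketch only: the feeder puts userName/password into the session, and the
  // EL placeholders inside login.json (a hypothetical template file) are
  // resolved at request time instead of being passed in a Map.
  val scn = scenario("Login")
    .feed(csv("user_credentials.csv"))
    .exec(
      http("login")
        .post("/api/login")
        .body(ELFileBody("login.json")).asJSON
        .check(status.is(200)))
}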
I am trying to extract the business name and address from each listing and export them to a CSV, but I am having trouble with the CSV output. I think bizs = hxs.select("//div[@class='listing_content']") may be causing the problem.
yp_spider.py
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from yp.items import Biz

class MySpider(BaseSpider):
    name = "ypages"
    allowed_domains = ["yellowpages.com"]
    start_urls = ["http://www.yellowpages.com/sanfrancisco/restaraunts"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        bizs = hxs.select("//div[@class='listing_content']")
        items = []
        for biz in bizs:
            item = Biz()
            # Relative XPaths (".//") scope the selection to this listing;
            # "//h3/..." would match every listing on the page, so every item
            # would carry all names and addresses at once.
            item['name'] = biz.select(".//h3/a/text()").extract()
            item['address'] = biz.select(".//span[@class='street-address']/text()").extract()
            print item
            items.append(item)
        return items
items.py
# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/topics/items.html
from scrapy.item import Item, …
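items.py is cut off above; for completeness, a minimal sketch of an item carrying the two fields the spider fills in (the import is the standard scrapy.item pair):

from scrapy.item import Item, Field

class Biz(Item):
    # One field per value the spider extracts from a listing.
    name = Field()
    address = Field()

With the item defined, the CSV does not need to be assembled by hand: Scrapy's built-in feed export can produce it, e.g. scrapy crawl ypages -o businesses.csv -t csv.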