I am trying to load data into a Colab notebook using tf.keras.preprocessing.image_dataset_from_directory, where a (flat) directory contains a bunch of jpg images and the label classes are contained in a separate csv file.
According to the documentation:
Either "inferred" (labels are generated from the directory structure), or a list/tuple of integer labels of the same size as the number of image files found in the directory. Labels should be sorted according to the alphanumeric order of the image file paths (obtained via os.walk(directory) in Python).
I read the csv with Pandas and converted it to a list using the following, then passed train_labels in as the labels argument:
import pandas as pd

labels = pd.read_csv(_URL)
train_labels = labels.values[:, 1].tolist()  # second column holds the class labels
print("Total labels:", len(train_labels))
print(train_labels)
>>> Total labels: 1164
>>> [1, …
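For illustration, here is a minimal sketch of how I understand the labels contract from the documentation quoted above, assuming a hypothetical labels.csv with filename and label columns (the file name, column names, and image directory are placeholders, not my actual data):

import pandas as pd
import tensorflow as tf

# Hypothetical file/column names for illustration only.
labels_df = pd.read_csv("labels.csv")  # columns assumed: filename, label

# Sort so the labels line up with the alphanumeric order of the image
# file paths, as the documentation requires.
labels_df = labels_df.sort_values("filename")
train_labels = labels_df["label"].tolist()

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "images/",             # flat directory of jpg files
    labels=train_labels,   # explicit list instead of "inferred"
    label_mode="int",
    image_size=(256, 256),
)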
I have a (GCP) Cloud Function that is meant to aggregate hourly data and write it to Cloud Bigtable, but it appears to log "Function execution took 100 ms, finished with status: 'ok'" before the full code has completed, and the lines after that point sometimes run and sometimes don't. If anyone has experience with this and can advise, that would be great, thanks!
The script works when I run it on my local machine; it only fails inside the Cloud Function, and I am not sure what triggers the termination of the code. I tried adding try/catch blocks, but nothing throws an error either. The main part of the code is reproduced below:
const Bigtable = require('@google-cloud/bigtable');
const bigtableOptions = { projectId: process.env.PROJECT_ID };
const bigtable = new Bigtable(bigtableOptions);
const cbt = bigtable.instance(process.env.BIGTABLE_INSTANCE);
const async = require("async");
const moment = require("moment");
require("moment-round");
const bigtableFetchRawDataForDmac = require("./fetchData").bigtableFetchRawDataForDmac;
exports.patchJob = (event, context) => {
  const pubsubMsg = Buffer.from(event.data, 'base64').toString();
  const jsonMsg = tryParseJSON(pubsubMsg); // msg in format { time: "2018-12-24T02:00:00.000Z", dmac: ["abc", "def", "ghi"] }
  if(!jsonMsg) return;
  else {
    if(!jsonMsg.time) {
      console.log("Time not provided");
      // res.status(400).json({ err: …
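My understanding is that a Pub/Sub-triggered background function is only kept alive until the handler returns, so any async work has to be reported back through a returned Promise. A minimal sketch of that shape, not my actual code, with doHourlyAggregation as a hypothetical promisified stand-in for the fetch-and-Bigtable-write pipeline:

// Minimal sketch: the runtime waits on the returned Promise, so the
// instance is not reaped before the async writes settle.
exports.patchJob = (event, context) => {
  const pubsubMsg = Buffer.from(event.data, 'base64').toString();
  const jsonMsg = JSON.parse(pubsubMsg);

  // doHourlyAggregation is a hypothetical wrapper around the
  // fetch + Bigtable write steps, returning a Promise.
  return doHourlyAggregation(jsonMsg)
    .then(() => console.log("aggregation finished"))
    .catch(err => {
      console.error(err);
      throw err; // rethrow so the invocation is marked as failed
    });
};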