How do I apply withTrashed() to a hasManyThrough relationship?
$this->hasManyThrough('App\Message', 'App\Deal')->withTrashed();
returns:

Call to undefined method Illuminate\Database\Query\Builder::withTrashed()

when I run:
$messages = Auth::user()->messages()->with('deal')->orderBy('created_at', 'DESC')->get();
Here is my Deal model:
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\SoftDeletes;

class Deal extends Model
{
    use SoftDeletes;

    /* ... */

    protected $dates = ['deleted_at'];

    public function user() {
        return $this->belongsTo('App\User');
    }

    public function messages() {
        return $this->hasMany('App\Message'); // I've tried to put withTrashed() here; there is no error, but it doesn't include soft-deleted items.
    }
}
I want to check whether var is an Array or a Dict.
typeof(var) == Dict
typeof(var) == Array
But it doesn't work, because typeof is too precise: Dict{ASCIIString,Int64}. What is the best way to do this?
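One way to sidestep the over-precise typeof comparison is to test against the abstract type with isa (or the subtype operator <:), which matches any parameterization. A minimal sketch:

```julia
# typeof returns the concrete type (e.g. Dict{String,Int64}), so comparing it
# with == Dict fails; isa checks membership in the whole Dict family instead.
d = Dict("a" => 1)
a = [1, 2, 3]

isa(d, Dict)          # true, regardless of key/value types
isa(a, Array)         # true
typeof(d) <: Dict     # equivalent subtype check
a isa AbstractArray   # infix form; also matches other array-like types
```

Using AbstractArray instead of Array in the check also covers ranges and other array-like containers, which is often what one actually wants.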
I want to get the number of rows of a DataFrame. I can achieve this with size(myDataFrame)[1]. Is there a cleaner way?
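Assuming the DataFrames.jl package, the package itself exposes nrow (and ncol) for exactly this, and size also accepts a dimension argument, avoiding the indexing. A short sketch:

```julia
using DataFrames

df = DataFrame(x = 1:3, y = ["a", "b", "c"])

nrow(df)     # idiomatic row count
ncol(df)     # column count
size(df, 1)  # equivalent, avoids size(df)[1]
```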
I have a problem with Laravel 5.3's new Gate::allows method.
Here is my AuthServiceProvider.php for testing:
<?php

namespace App\Providers;

use Illuminate\Support\Facades\Gate;
use Illuminate\Foundation\Support\Providers\AuthServiceProvider as ServiceProvider;

class AuthServiceProvider extends ServiceProvider
{
    /**
     * The policy mappings for the application.
     *
     * @var array
     */
    protected $policies = [
        'App\Model' => 'App\Policies\ModelPolicy',
    ];

    /**
     * Register any authentication / authorization services.
     *
     * @return void
     */
    public function boot()
    {
        $this->registerPolicies();

        Gate::define('settings', function ($user)
        {
            return true;
        });
    }
}
Normally everyone should be able to access the settings, regardless of the user's role. But it always shows "no" instead of "ok".
<?php
namespace App\Http\Controllers;
use Gate;
use Illuminate\Http\Request; …

Before scikit-learn 0.20, we could use result.grid_scores_[result.best_index_] to get the standard deviation. (It returned, for example: mean: 0.76172, std: 0.05225, params: {'n_neighbors': 21})
What is the best way to get the standard deviation of the best score in scikit-learn 0.20?
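In scikit-learn 0.20 the removed grid_scores_ attribute is replaced by the cv_results_ dictionary, which exposes the same per-candidate statistics as arrays indexed by best_index_. A sketch with a small illustrative grid (the KNN setup here is just an example, not the asker's code):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
result = GridSearchCV(KNeighborsClassifier(),
                      {'n_neighbors': [3, 5, 7]}, cv=3).fit(X, y)

best = result.best_index_
mean = result.cv_results_['mean_test_score'][best]    # same value as result.best_score_
std = result.cv_results_['std_test_score'][best]      # std dev across the CV folds
params = result.cv_results_['params'][best]           # same as result.best_params_
print(mean, std, params)
```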
I am trying to build a very simple LSTM autoencoder with PyTorch. I always train it on the same data:
x = torch.Tensor([[0.0], [0.1], [0.2], [0.3], [0.4]])
I built my model following this link:
inputs = Input(shape=(timesteps, input_dim))
encoded = LSTM(latent_dim)(inputs)
decoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(input_dim, return_sequences=True)(decoded)
sequence_autoencoder = Model(inputs, decoded)
encoder = Model(inputs, encoded)
My code runs without errors, but y_pred converges to:
tensor([[[0.2]],
        [[0.2]],
        [[0.2]],
        [[0.2]],
        [[0.2]]], grad_fn=<StackBackward>)
Here is my code:
import torch
import torch.nn as nn
import torch.optim as optim

class LSTM(nn.Module):
    def __init__(self, input_dim, latent_dim, batch_size, num_layers):
        super(LSTM, self).__init__()
        self.input_dim = input_dim
        self.latent_dim = latent_dim
        self.batch_size = batch_size
        self.num_layers = num_layers
        self.encoder = nn.LSTM(self.input_dim, self.latent_dim, self.num_layers) …

My saved state_dict does not contain all the layers of my model. How can I ignore the missing-keys error and initialize the remaining weights?
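PyTorch's load_state_dict accepts strict=False, which skips keys absent from the checkpoint and leaves the corresponding layers at their current (e.g. freshly initialized) weights. A minimal sketch with a hypothetical two-layer model and a checkpoint covering only the first layer:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 2))

# A partial checkpoint that only covers the first layer (hypothetical example).
partial_state = {'0.weight': torch.zeros(8, 4), '0.bias': torch.zeros(8)}

# strict=False ignores missing keys instead of raising; layers not present in
# the checkpoint keep whatever weights they already have.
result = model.load_state_dict(partial_state, strict=False)
print(result.missing_keys)  # keys the checkpoint did not provide
```

The returned object also carries unexpected_keys, which is useful for spotting checkpoints saved from a differently named model.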