It seems a breaking change was introduced after v0.6.14 #1029
With v0.6.14 the second grads call returned a plain Vector; with the newest v0.6.16 it returns a Fill instead.
JuliaReinforcementLearning/ReinforcementLearning.jl#370
cc @pilgrimygy
I'm not sure which specific PR caused this, but we don't generally commit to the returned gradient type and may provide semantically equivalent alternatives for performance reasons. Is returning fills problematic for your use case?
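For context, a minimal sketch (assuming Zygote 0.6 with its usual FillArrays-backed rules) of how a Fill can stand in for a dense gradient of the same values:

```julia
using Zygote, FillArrays

# The gradient of sum with respect to a vector is all ones; Zygote may
# return it as a lazy Fill rather than allocating a dense Vector.
g = gradient(x -> sum(x), [1.0, 2.0, 3.0])[1]

g == ones(3)    # true: same values, so semantically equivalent
g isa Fill      # may be true, depending on the Zygote/ChainRules versions
```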
It does generically consider Fill a valid gradient, e.g. this hasn't changed:

```julia
julia> using Zygote

julia> W = [1,2,3]; gradient(() -> sum(W), Params([W]))
Grads(...)

julia> ans[W]
3-element Fill{Int64}, with entries equal to 1
```

But something is causing this to propagate further than it used to. It might be #1001, but I don't see why right now. You could also argue that …
Yeah, that's my concern.
Didn't we fix this?
I'm afraid not. Most optimizers in https://github.com/FluxML/Flux.jl/blob/master/src/optimise/optimisers.jl will modify Δ in-place.
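To illustrate, a minimal sketch (the Descent-style update below paraphrases what those optimizers do, not a copy of the file) of why an in-place update fails on a lazy gradient:

```julia
using FillArrays

η = 0.1
Δ = Fill(1.0, 3)   # the kind of lazy gradient Zygote can return

# Flux's Descent effectively does `Δ .*= η` before the update subtracts Δ.
# Fill is immutable (no setindex!), so the in-place broadcast throws:
Δ .*= η            # ERROR: setindex! not defined for Fill
```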
I believe FluxML/Flux.jl#1613 addresses that, but isn't in a tagged release yet.
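Until that lands in a release, one possible workaround (a sketch; assumes your Zygote version supports indexed assignment into Grads) is to materialize lazy gradients before the optimizer touches them:

```julia
using Flux, Zygote

W = [1.0, 2.0, 3.0]
ps = Flux.params(W)
gs = gradient(() -> sum(W), ps)   # gs[W] may be a lazy Fill

# Copy any lazy gradient into a plain, mutable Array:
for p in ps
    gs[p] === nothing || (gs[p] = collect(gs[p]))
end

Flux.Optimise.update!(Descent(0.1), ps, gs)   # in-place update now works
```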
I see. Thanks!
A new Flux version has been tagged.