Convolutional LSTM for spatial forecasting



This post is the first in a loose series exploring forecasting of spatially-determined data over time. By spatially-determined I mean that whatever the quantities we're trying to predict – be they univariate or multivariate time series, of spatial dimensionality or not – the input data are given on a spatial grid.

For example, the input could be atmospheric measurements, such as sea surface temperature or pressure, given at some set of latitudes and longitudes. The target to be predicted could then span that same (or another) grid. Alternatively, it could be a univariate time series, like a meteorological index.

But wait a second, you may be thinking. For time-series prediction, we have that time-honored set of recurrent architectures (e.g., LSTM, GRU), right? Right. We do; but, once we feed spatial data to an RNN, treating different locations as different input features, we lose an essential structural relationship. Importantly, we need to operate in both space and time. We want both: recurrence relations and convolutional filters. Enter convolutional RNNs.

What to expect from this post

Today, we won't jump into real-world applications just yet. Instead, we'll take our time to build a convolutional LSTM (henceforth: convLSTM) in torch. For one, we have to – there is no official PyTorch implementation.

What's more, this post can serve as an introduction to building your own modules. This is something you may be familiar with from Keras or not – depending on whether you've used custom models or rather, preferred the declarative define -> compile -> fit style. (Yes, I'm implying there is some transfer going on if one comes to torch from Keras custom training. Syntactic and semantic details may differ, but both share the object-oriented style that allows for great flexibility and control.)

Last but not least, we'll also use this as a hands-on experience with RNN architectures (the LSTM, specifically). While the general concept of recurrence may be easy to grasp, it is not necessarily self-evident how those architectures should, or could, be coded. Personally, I find that independent of the framework used, RNN-related documentation leaves me confused. What exactly is being returned from calling an LSTM, or a GRU? (In Keras this depends on how you've defined the layer in question.) I suspect that once we've decided what we want to return, the actual code won't be all that complicated. Consequently, we'll take a detour clarifying what it is that torch and Keras are giving us. Implementing our convLSTM will be a lot more straightforward thereafter.

A torch convLSTM

The code discussed here may be found on GitHub. (Depending on when you're reading this, the code in that repository may have evolved though.)

My starting point was one of the PyTorch implementations found on the net, namely, this one. If you search for "PyTorch convGRU" or "PyTorch convLSTM", you will find considerable discrepancies in how these are realized – discrepancies not just in syntax and/or engineering ambition, but at the semantic level, right at the heart of what the architectures may be expected to do. As they say, let the buyer beware. (As for the implementation I ended up porting, I am confident that while numerous optimizations would be possible, the basic mechanism matches my expectations.)

What do I expect? Let's approach this task in a top-down way.

Input and output

The convLSTM's input will be a time series of spatial data, each observation being of size (time steps, channels, height, width).

Compare this with the usual RNN input format, be it in torch or Keras. In both frameworks, RNNs expect tensors of size (timesteps, input_dim). input_dim is 1 for univariate time series and greater than 1 for multivariate ones. Conceptually, we can match this to convLSTM's channels dimension: there could be a single channel, for temperature, say – or there could be several, such as for pressure, temperature, and humidity. The two additional dimensions found in convLSTM, height and width, are spatial indexes into the data.
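To make that concrete, here is a quick shape-only sketch (tensor contents are random, and the batch dimension is omitted for clarity):

library(torch)

# a multivariate series with 10 time steps and 3 features, as an RNN would see it
rnn_input <- torch_randn(10, 3)               # (timesteps, input_dim)

# the same 3 quantities, measured on a 16 x 16 spatial grid, as a convLSTM would see them
convlstm_input <- torch_randn(10, 3, 16, 16)  # (timesteps, channels, height, width)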

In sum, we want to be able to pass data that:

  • consist of one or more features,

  • evolve in time, and

  • are indexed in two spatial dimensions.

How about the output? We want to be able to return forecasts for as many time steps as we have in the input sequence. This is something that torch RNNs do by default, while Keras equivalents do not. (You have to pass return_sequences = TRUE to obtain that effect.) If we are interested in predictions for just a single point in time, we can always pick the last time step in the output tensor.
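If all we need is that single point in time, a minimal sketch of the selection could look like this (the tensor below is random and just stands in for an RNN's stacked per-time-step outputs):

library(torch)

outputs <- torch_randn(8, 10, 5)             # (batch_size, timesteps, features)
last_step <- outputs[ , dim(outputs)[2], ]   # (batch_size, features)
dim(last_step)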

However, with RNNs, it is not all about outputs. RNN architectures also carry through hidden states.

What are hidden states? I carefully phrased that sentence to be as general as possible – deliberately circling around the confusion that, in my view, often arises at this point. We'll attempt to clear up some of that confusion in a second, but let's first finish our high-level requirements specification.

We want our convLSTM to be usable in different contexts and applications. Various architectures exist that make use of hidden states, most prominently perhaps, encoder-decoder architectures. Thus, we want our convLSTM to return those as well. Again, this is something a torch LSTM does by default, while in Keras it is achieved using return_state = TRUE.

Now though, it really is time for that interlude. We'll sort out the ways things are named by both torch and Keras, and inspect what you get back from their respective GRUs and LSTMs.

Interlude: Outputs, states, hidden values … what's what?

For this to remain an interlude, I summarize findings on a high level. The code snippets in the appendix show how to arrive at these results. Heavily commented, they probe return values from both Keras and torch GRUs and LSTMs. Running them will make the upcoming summaries seem a lot less abstract.

First, let's look at the ways you create an LSTM in both frameworks. (I'll generally use the LSTM as the "prototypical RNN example", and just mention GRUs when there are differences significant in the context in question.)

In Keras, to create an LSTM you may write something like this:

lstm <- layer_lstm(units = 1)

The torch equivalent would be:

lstm <- nn_lstm(
  input_size = 2, # number of input features
  hidden_size = 1 # number of hidden (and output!) features
)

Don't focus on torch's input_size parameter for this discussion. (It's the number of features in the input tensor.) The parallel occurs between Keras' units and torch's hidden_size. If you've been using Keras, you're probably thinking of units as the thing that determines output size (equivalently, the number of features in the output). So when torch lets us arrive at the same result using hidden_size, what does that mean? It means that somehow we're specifying the same thing, using different terminology. And it does make sense, since at every time step current input and previous hidden state are added:

\[
\mathbf{h}_t = \mathbf{W}_{x}\mathbf{x}_t + \mathbf{W}_{h}\mathbf{h}_{t-1}
\]

Now, about those hidden states.

When a Keras LSTM is defined with return_state = TRUE, its return value is a structure of three entities called output, memory state, and carry state. In torch, the same entities are referred to as output, hidden state, and cell state. (In torch, we always get all of them.)
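To make the naming concrete, here is how those pieces can be picked apart in torch (a minimal sketch, using the same toy setup as in the appendix below):

library(torch)
library(zeallot)

lstm <- nn_lstm(input_size = 1, hidden_size = 1, batch_first = TRUE)
input <- torch_randn(3, 4, 1)   # (batch_size, timesteps, input_size)

ret <- lstm(input)
output <- ret[[1]]              # per-time-step outputs: (batch_size, timesteps, hidden_size)
c(h_n, c_n) %<-% ret[[2]]       # final hidden and cell state: (num_layers, batch_size, hidden_size)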

So are we dealing with three different kinds of entities? We are not.

The cell, or carry state is that special thing that sets LSTMs apart from GRUs, deemed responsible for the "long" in "long short-term memory". Technically, it could be reported to the user at all points in time; as we'll see shortly though, it is not.

What about outputs and hidden, or memory states? Confusingly, these really are the same thing. Recall that for each item in the input sequence, we're combining it with the previous state, resulting in a new state, to be made use of in the next step:

\[
\mathbf{h}_t = \mathbf{W}_{x}\mathbf{x}_t + \mathbf{W}_{h}\mathbf{h}_{t-1}
\]

Now, say we're interested in looking at just the final time step – that is, the default output of a Keras LSTM. From that point of view, we can consider those intermediate computations as "hidden". Seen like that, output and hidden states feel different.

However, we can also request to see the outputs for every time step. If we do so, there is no difference – the outputs (plural) equal the hidden states. This can be verified using the code in the appendix.
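Here is a minimal version of that check, assuming the same single-layer, batch-first torch LSTM as in the sketch above; the final hidden state should coincide with the last-time-step slice of the outputs:

library(torch)

lstm <- nn_lstm(input_size = 1, hidden_size = 1, batch_first = TRUE)
input <- torch_randn(3, 4, 1)

ret <- lstm(input)
outputs <- ret[[1]]        # (batch_size, timesteps, hidden_size)
h_n <- ret[[2]][[1]]       # (num_layers, batch_size, hidden_size)

# last-time-step outputs equal the final hidden state
torch_allclose(outputs[ , dim(outputs)[2], ], h_n[1, , ])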

Thus, of the three things returned by an LSTM, two are really the same. How about the GRU, then? As there is no "cell state", we really have just one type of thing left over – call it outputs or hidden states.

Let's summarize this in a table.

Table 1: RNN terminology. Comparing torch-speak and Keras-speak. In row 1, the terms are parameter names. In rows 2 and 3, they are quoted from current documentation.

  Meaning                                                           torch          Keras
  ----------------------------------------------------------------  -------------  -------------
  Number of features in the output. This determines both how many   hidden_size    units
  output features there are and the dimensionality of the hidden
  states.

  Per-time-step output; latent state; intermediate state ...        hidden state   memory state
  This could be called "public state", in the sense that we, the
  users, are able to obtain all values.

  Cell state; inner state ... (LSTM only)                           cell state     carry state
  This could be called "private state", in that we are able to
  obtain a value only for the last time step. More on that in a
  second.

Now, about that public vs. private distinction. In both frameworks, we can obtain outputs (hidden states) for every time step. The cell state, however, we can access only for the very last time step. This is purely an implementation decision. As we'll see when building our own recurrent module, there are no obstacles inherent in keeping track of cell states and passing them back to the user.
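As a quick illustration – only a sketch, using torch's single-time-step nn_lstm_cell with arbitrary sizes rather than our convLSTM – nothing keeps a hand-rolled loop from storing the cell state at every step:

library(torch)
library(zeallot)

cell <- nn_lstm_cell(input_size = 1, hidden_size = 1)
x <- torch_randn(3, 4, 1)                 # (batch_size, timesteps, input_size)

h <- torch_zeros(3, 1)
c <- torch_zeros(3, 1)
cell_states <- vector(mode = "list", length = 4)

for (t in 1:4) {
  c(h, c) %<-% cell(x[ , t, ], list(h, c))
  cell_states[[t]] <- c                   # cell state for this very time step
}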

If you dislike the pragmatism of this distinction, you can always go with the math. When a new cell state has been computed (based on prior cell state, input, forget, and cell gates – the specifics of which we are not going to get into here), it is transformed to the hidden (a.k.a. output) state making use of yet another gate, namely, the output gate:

\[
h_t = o_t \odot \tanh(c_t)
\]

Definitely, then, hidden state (output, resp.) builds on cell state, adding additional modeling power.

Now it is time to get back to our original goal and build that convLSTM. First though, let's summarize the return values obtainable from torch and Keras.

Table 2: Contrasting ways of obtaining various return values in torch vs. Keras. Cf. the appendix for complete examples.

  To access ...                                                      torch               Keras
  -----------------------------------------------------------------  ------------------  --------------------------------------------
  all intermediate outputs ( = per-time-step outputs)                ret[[1]]            return_sequences = TRUE
  both "hidden state" (output) and "cell state" from the final
  time step (only!)                                                  ret[[2]]            return_state = TRUE
  all intermediate outputs and the final "cell state"                both of the above   return_sequences = TRUE, return_state = TRUE
  all intermediate outputs and "cell states" from all time steps     no way              no way

convLSTM, the plan

In both torch and Keras RNN architectures, single time steps are processed by corresponding Cell classes: there is an LSTM Cell matching the LSTM, a GRU Cell matching the GRU, and so on. We do the same for ConvLSTM. In convlstm_cell(), we first define what should happen to a single observation; then in convlstm(), we build up the recurrence logic.

Once we're done, we create a dummy dataset, as reduced-to-the-essentials as can be. With more complex datasets, even artificial ones, chances are that if we don't see any training progress, there are a lot of possible explanations. We want a sanity check that, if failed, leaves no excuses. Realistic applications are left to future posts.

A single step: convlstm_cell

Our convlstm_cell's constructor takes arguments input_dim, hidden_dim, and bias, just like a torch LSTM Cell.

But we're processing two-dimensional input data. Instead of the usual affine combination of new input and previous state, we use a convolution of kernel size kernel_size. Inside convlstm_cell, it is self$conv that takes care of this.

Note how the channels dimension, which in the original input data would correspond to different variables, is creatively used to consolidate four convolutions into one: each channel's output will be passed to just one of the four cell gates. Once in possession of the convolution output, forward() applies the gate logic, resulting in the two types of states it needs to send back to the caller.

library(torch)
library(zeallot)

convlstm_cell <- nn_module(
  
  initialize = function(input_dim, hidden_dim, kernel_size, bias) {
    
    self$hidden_dim <- hidden_dim
    
    padding <- kernel_size %/% 2
    
    self$conv <- nn_conv2d(
      in_channels = input_dim + self$hidden_dim,
      # for each of input, forget, output, and cell gates
      out_channels = 4 * self$hidden_dim,
      kernel_size = kernel_size,
      padding = padding,
      bias = bias
    )
  },
  
  forward = function(x, prev_states) {

    c(h_prev, c_prev) %<-% prev_states
    
    combined <- torch_cat(list(x, h_prev), dim = 2)  # concatenate along channel axis
    combined_conv <- self$conv(combined)
    c(cc_i, cc_f, cc_o, cc_g) %<-% torch_split(combined_conv, self$hidden_dim, dim = 2)
    
    # input, forget, output, and cell gates (corresponding to torch's LSTM)
    i <- torch_sigmoid(cc_i)
    f <- torch_sigmoid(cc_f)
    o <- torch_sigmoid(cc_o)
    g <- torch_tanh(cc_g)
    
    # cell state
    c_next <- f * c_prev + i * g
    # hidden state
    h_next <- o * torch_tanh(c_next)
    
    list(h_next, c_next)
  },
  
  init_hidden = function(batch_size, height, width) {
    
    list(
      torch_zeros(batch_size, self$hidden_dim, height, width, device = self$conv$weight$device),
      torch_zeros(batch_size, self$hidden_dim, height, width, device = self$conv$weight$device))
  }
)

Now convlstm_cell has to be called for every time step. This is done by convlstm.

Iteration over time steps: convlstm

A convlstm may consist of several layers, just like a torch LSTM. For each layer, we are able to specify hidden and kernel sizes individually.

During initialization, each layer gets its own convlstm_cell. On call, convlstm executes two loops. The outer one iterates over layers. At the end of each iteration, we store the final pair (hidden state, cell state) for later reporting. The inner loop runs over the input sequence, calling convlstm_cell at each time step.

We also keep track of intermediate outputs, so we'll be able to return the complete list of hidden_states seen during the process. Unlike a torch LSTM, we do this for every layer.

convlstm <- nn_module(
  
  # hidden_dims and kernel_sizes are vectors, with one element for each layer in n_layers
  initialize = function(input_dim, hidden_dims, kernel_sizes, n_layers, bias = TRUE) {
 
    self$n_layers <- n_layers
    
    self$cell_list <- nn_module_list()
    
    for (i in 1:n_layers) {
      cur_input_dim <- if (i == 1) input_dim else hidden_dims[i - 1]
      self$cell_list$append(convlstm_cell(cur_input_dim, hidden_dims[i], kernel_sizes[i], bias))
    }
  },
  
  # we always assume batch-first
  forward = function(x) {
    
    c(batch_size, seq_len, num_channels, height, width) %<-% x$size()
   
    # initialize hidden states
    init_hidden <- vector(mode = "list", length = self$n_layers)
    for (i in 1:self$n_layers) {
      init_hidden[[i]] <- self$cell_list[[i]]$init_hidden(batch_size, height, width)
    }
    
    # list containing the outputs, of length seq_len, for each layer
    # this is the same as h, at each step in the sequence
    layer_output_list <- vector(mode = "list", length = self$n_layers)
    
    # list containing the last states (h, c) for each layer
    layer_state_list <- vector(mode = "list", length = self$n_layers)

    cur_layer_input <- x
    hidden_states <- init_hidden
    
    # loop over layers
    for (i in 1:self$n_layers) {
      
      # every layer's hidden state starts from 0 (non-stateful)
      c(h, c) %<-% hidden_states[[i]]
      # outputs, of length seq_len, for this layer
      # equivalently, list of h states for each time step
      output_sequence <- vector(mode = "list", length = seq_len)
      
      # loop over time steps
      for (t in 1:seq_len) {
        c(h, c) %<-% self$cell_list[[i]](cur_layer_input[ , t, , , ], list(h, c))
        # keep track of output (h) for every time step
        # h has dim (batch_size, hidden_size, height, width)
        output_sequence[[t]] <- h
      }

      # stack hs for all time steps over the seq_len dimension
      # stacked_outputs has dim (batch_size, seq_len, hidden_size, height, width)
      # same as the input to forward (x)
      stacked_outputs <- torch_stack(output_sequence, dim = 2)
      
      # pass the list of outputs (hs) to the next layer
      cur_layer_input <- stacked_outputs
      
      # keep track of the list of outputs for this layer
      layer_output_list[[i]] <- stacked_outputs
      # keep track of the last state for this layer
      layer_state_list[[i]] <- list(h, c)
    }
 
    list(layer_output_list, layer_state_list)
  }
    
)

Calling the convlstm

Let's see the input format expected by convlstm, and how to access its different outputs.

Here is a suitable input tensor.

# batch_size, seq_len, channels, height, width
x <- torch_rand(c(2, 4, 3, 16, 16))

First we make use of a single layer.

model <- convlstm(input_dim = 3, hidden_dims = 5, kernel_sizes = 3, n_layers = 1)

c(layer_outputs, layer_last_states) %<-% model(x)

We get back a list of length two, which we immediately split up into the two types of output returned: intermediate outputs from all layers, and final states (of both types) for each layer.

With just a single layer, layer_outputs[[1]] holds all of the layer's intermediate outputs, stacked on dimension two.

dim(layer_outputs[[1]])
# [1]  2  4  5 16 16

layer_last_states[[1]] is a list of tensors, the first of which holds the single layer's final hidden state, and the second, its final cell state.

dim(layer_last_states[[1]][[1]])
# [1]  2  5 16 16
dim(layer_last_states[[1]][[2]])
# [1]  2  5 16 16

For comparison, this is how the return values look for a multi-layer architecture.

model <- convlstm(input_dim = 3, hidden_dims = c(5, 5, 1), kernel_sizes = rep(3, 3), n_layers = 3)
c(layer_outputs, layer_last_states) %<-% model(x)

# for each layer, tensor of size (batch_size, seq_len, hidden_size, height, width)
dim(layer_outputs[[1]])
# 2  4  5 16 16
dim(layer_outputs[[3]])
# 2  4  1 16 16

# list of two tensors for each layer
str(layer_last_states)
# List of 3
#  $ :List of 2
#   ..$ :Float [1:2, 1:5, 1:16, 1:16]
#   ..$ :Float [1:2, 1:5, 1:16, 1:16]
#  $ :List of 2
#   ..$ :Float [1:2, 1:5, 1:16, 1:16]
#   ..$ :Float [1:2, 1:5, 1:16, 1:16]
#  $ :List of 2
#   ..$ :Float [1:2, 1:1, 1:16, 1:16]
#   ..$ :Float [1:2, 1:1, 1:16, 1:16]

# h, of size (batch_size, hidden_size, height, width)
dim(layer_last_states[[3]][[1]])
# 2  1 16 16

# c, of size (batch_size, hidden_size, height, width)
dim(layer_last_states[[3]][[2]])
# 2  1 16 16

Now we want to sanity-check this module with the simplest-possible dummy data.

Sanity-checking the convlstm

We generate black-and-white "movies" of diagonal beams successively translated in space.

Each sequence consists of six time steps, and each beam of six pixels. Just a single sequence is created manually. To create that one sequence, we start from a single beam:

library(torchvision)

beams <- vector(mode = "list", length = 6)
beam <- torch_eye(6) %>% nnf_pad(c(6, 12, 12, 6)) # left, right, top, bottom
beams[[1]] <- beam

Using torch_roll(), we create a pattern where this beam moves up diagonally, and stack the individual tensors along the timesteps dimension.

for (i in 2:6) {
  beams[[i]] <- torch_roll(beam, c(-(i-1),i-1), c(1, 2))
}

init_sequence <- torch_stack(beams, dim = 1)

That's a single sequence. Thanks to torchvision::transform_random_affine(), we almost effortlessly produce a dataset of a hundred sequences. Moving beams start at random points in the spatial frame, but they all share the upward-diagonal motion.

sequences <- vector(mode = "list", length = 100)
sequences[[1]] <- init_sequence

for (i in 2:100) {
  sequences[[i]] <- transform_random_affine(init_sequence, degrees = 0, translate = c(0.5, 0.5))
}

input <- torch_stack(sequences, dim = 1)

# add channels dimension
input <- input$unsqueeze(3)
dim(input)
# [1] 100   6  1  24  24

That's it for the raw data. Now we still need a dataset and a dataloader. Of the six time steps, we use the first five as input and try to predict the last one.

dummy_ds <- dataset(
  
  initialize = function(data) {
    self$data <- data
  },
  
  .getitem = function(i) {
    list(x = self$data[i, 1:5, ..], y = self$data[i, 6, ..])
  },
  
  .length = function() {
    nrow(self$data)
  }
)

ds <- dummy_ds(input)
dl <- dataloader(ds, batch_size = 100)

Here is a tiny-ish convLSTM, trained for motion prediction:

model <- convlstm(input_dim = 1, hidden_dims = c(64, 1), kernel_sizes = c(3, 3), n_layers = 2)

optimizer <- optim_adam(model$parameters)

num_epochs <- 100

for (epoch in 1:num_epochs) {
  
  model$train()
  batch_losses <- c()
  
  for (b in enumerate(dl)) {
    
    optimizer$zero_grad()
    
    # last-time-step output from the last layer
    preds <- model(b$x)[[2]][[2]][[1]]
  
    loss <- nnf_mse_loss(preds, b$y)
    batch_losses <- c(batch_losses, loss$item())
    
    loss$backward()
    optimizer$step()
  }
  
  if (epoch %% 10 == 0)
    cat(sprintf("\nEpoch %d, training loss:%3f\n", epoch, mean(batch_losses)))
}
Epoch 10, training loss:0.008522

Epoch 20, training loss:0.008079

Epoch 30, training loss:0.006187

Epoch 40, training loss:0.003828

Epoch 50, training loss:0.002322

Epoch 60, training loss:0.001594

Epoch 70, training loss:0.001376

Epoch 80, training loss:0.001258

Epoch 90, training loss:0.001218

Epoch 100, training loss:0.001171

Loss decreases, but that in itself is not a guarantee the model has learned anything. Has it? Let's inspect its forecast for the very first sequence and see.
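One way to produce such a comparison (a sketch only; the zoomed-in region indices below are illustrative, not necessarily the ones used for the printouts that follow):

model$eval()

iter <- dataloader_make_iter(dl)
b <- dataloader_next(iter)

# last-time-step output from the last layer, for the whole batch
preds <- model(b$x)[[2]][[2]][[1]]

# ground truth for the first sequence, time step six (zooming in on a 10 x 10 region)
round(as_array(b$y[1, 1, 8:17, 8:17]), 2)
# corresponding forecast
round(as_array(preds$detach()[1, 1, 8:17, 8:17]), 2)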

For printing, I'm zooming in on the relevant region of the 24x24-pixel frame. Here is the ground truth for time step six:

0  0  0  0  0  0  0  0  0  0
0  0  0  0  0  0  0  0  0  0
0  0  1  0  0  0  0  0  0  0
0  0  0  1  0  0  0  0  0  0
0  0  0  0  1  0  0  0  0  0
0  0  0  0  0  1  0  0  0  0
0  0  0  0  0  0  1  0  0  0
0  0  0  0  0  0  0  1  0  0
0  0  0  0  0  0  0  0  0  0
0  0  0  0  0  0  0  0  0  0

And here is the forecast. This does not look bad at all, given there was neither experimentation nor tuning involved.

       [,1]  [,2]  [,3]  [,4]  [,5]  [,6]  [,7]  [,8]  [,9] [,10]
 [1,]  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00     0
 [2,] -0.02  0.36  0.01  0.06  0.00  0.00  0.00  0.00  0.00     0
 [3,]  0.00 -0.01  0.71  0.01  0.06  0.00  0.00  0.00  0.00     0
 [4,] -0.01  0.04  0.00  0.75  0.01  0.06  0.00  0.00  0.00     0
 [5,]  0.00 -0.01 -0.01 -0.01  0.75  0.01  0.06  0.00  0.00     0
 [6,]  0.00  0.01  0.00 -0.07 -0.01  0.75  0.01  0.06  0.00     0
 [7,]  0.00  0.01 -0.01 -0.01 -0.07 -0.01  0.75  0.01  0.06     0
 [8,]  0.00  0.00  0.01  0.00  0.00 -0.01  0.00  0.71  0.00     0
 [9,]  0.00  0.00  0.00  0.01  0.01  0.00  0.03 -0.01  0.37     0
[10,]  0.00  0.00  0.00  0.00  0.00  0.00 -0.01 -0.01 -0.01     0

This should suffice for a sanity check. If you made it till the end, thanks for your patience! In the best case, you will be able to apply this architecture (or a similar one) to your own data – but even if not, I hope you have enjoyed learning about torch model coding and/or RNN weirdness 😉

I, for one, am certainly looking forward to exploring convLSTMs on real-world problems in the near future. Thanks for reading!

Appendix

This appendix contains the code used to create tables 1 and 2 above.

Keras

LSTM

library(keras)

# batch of 3, with 4 time steps each and a single feature
input <- k_random_normal(shape = c(3L, 4L, 1L))
input

# default args
# return shape = (batch_size, units)
lstm <- layer_lstm(
  units = 1,
  kernel_initializer = initializer_constant(value = 1),
  recurrent_initializer = initializer_constant(value = 1)
)
lstm(input)

# return_sequences = TRUE
# return shape = (batch_size, timesteps, units)
#
# note how for each item in the batch, the value for time step 4 equals that obtained above
lstm <- layer_lstm(
  units = 1,
  return_sequences = TRUE,
  kernel_initializer = initializer_constant(value = 1),
  recurrent_initializer = initializer_constant(value = 1)
  # bias is by default initialized to 0
)
lstm(input)

# return_state = TRUE
# return shape = list of:
#                - outputs, of shape: (batch_size, units)
#                - "memory states" for the last time step, of shape: (batch_size, units)
#                - "carry states" for the last time step, of shape: (batch_size, units)
#
# note how the first and second list items are identical!
lstm <- layer_lstm(
  units = 1,
  return_state = TRUE,
  kernel_initializer = initializer_constant(value = 1),
  recurrent_initializer = initializer_constant(value = 1)
)
lstm(input)

# return_state = TRUE, return_sequences = TRUE
# return shape = list of:
#                - outputs, of shape: (batch_size, timesteps, units)
#                - "memory states" for the last time step, of shape: (batch_size, units)
#                - "carry states" for the last time step, of shape: (batch_size, units)
#
# note how again, the "memory state" found in list item 2 matches the final-time-step outputs reported in item 1
lstm <- layer_lstm(
  units = 1,
  return_sequences = TRUE,
  return_state = TRUE,
  kernel_initializer = initializer_constant(value = 1),
  recurrent_initializer = initializer_constant(value = 1)
)
lstm(input)

GRU

# default args
# return shape = (batch_size, units)
gru <- layer_gru(
  units = 1,
  kernel_initializer = initializer_constant(value = 1),
  recurrent_initializer = initializer_constant(value = 1)
)
gru(input)

# return_sequences = TRUE
# return shape = (batch_size, timesteps, units)
#
# note how for each item in the batch, the value for time step 4 equals that obtained above
gru <- layer_gru(
  units = 1,
  return_sequences = TRUE,
  kernel_initializer = initializer_constant(value = 1),
  recurrent_initializer = initializer_constant(value = 1)
)
gru(input)

# return_state = TRUE
# return shape = list of:
#    - outputs, of shape: (batch_size, units)
#    - "memory states" for the last time step, of shape: (batch_size, units)
#
# note how the list items are identical!
gru <- layer_gru(
  units = 1,
  return_state = TRUE,
  kernel_initializer = initializer_constant(value = 1),
  recurrent_initializer = initializer_constant(value = 1)
)
gru(input)

# return_state = TRUE, return_sequences = TRUE
# return shape = list of:
#    - outputs, of shape: (batch_size, timesteps, units)
#    - "memory states" for the last time step, of shape: (batch_size, units)
#
# note how again, the "memory state" found in list item 2 matches the final-time-step outputs reported in item 1
gru <- layer_gru(
  units = 1,
  return_sequences = TRUE,
  return_state = TRUE,
  kernel_initializer = initializer_constant(value = 1),
  recurrent_initializer = initializer_constant(value = 1)
)
gru(input)

torch

LSTM (non-stacked architecture)

library(torch)

# batch of 3, with 4 time steps each and a single feature
# we will specify batch_first = TRUE when creating the LSTM
input <- torch_randn(c(3, 4, 1))
input

# default args
#
# note: there is an additional argument num_layers that we could use to specify a stacked LSTM - effectively composing two LSTM modules
# default for num_layers is 1 though 
lstm <- nn_lstm(
  input_size = 1, # number of input features
  hidden_size = 1, # number of hidden (and output!) features
  batch_first = TRUE # for easy comparison with Keras
)

nn_init_constant_(lstm$weight_ih_l1, 1)
nn_init_constant_(lstm$weight_hh_l1, 1)
nn_init_constant_(lstm$bias_ih_l1, 0)
nn_init_constant_(lstm$bias_hh_l1, 0)

# returns a list of length 2, namely
#   - outputs, of shape (batch_size, timesteps, hidden_size) - given we specified batch_first
#       Note 1: If this is a stacked LSTM, these are the outputs from the last layer only.
#               For our current purpose, this is irrelevant, as we're restricting ourselves to single-layer LSTMs.
#       Note 2: hidden_size here is equivalent to units in Keras - both specify number of features
#  - list of:
#    - hidden state for the last time step, of shape (num_layers, batch_size, hidden_size)
#    - cell state for the last time step, of shape (num_layers, batch_size, hidden_size)
#      Note 3: For a single-layer LSTM, the hidden states are already provided in the first list item.

lstm(input)

GRU (non-stacked architecture)

# default args
#
# note: there is an additional argument num_layers that we could use to specify a stacked GRU - effectively composing two GRU modules
# default for num_layers is 1 though 
gru <- nn_gru(
  input_size = 1, # number of input features
  hidden_size = 1, # number of hidden (and output!) features
  batch_first = TRUE # for easy comparison with Keras
)

nn_init_constant_(gru$weight_ih_l1, 1)
nn_init_constant_(gru$weight_hh_l1, 1)
nn_init_constant_(gru$bias_ih_l1, 0)
nn_init_constant_(gru$bias_hh_l1, 0)

# returns a list of length 2, namely
#   - outputs, of shape (batch_size, timesteps, hidden_size) - given we specified batch_first
#       Note 1: If this is a stacked GRU, these are the outputs from the last layer only.
#               For our current purpose, this is irrelevant, as we're restricting ourselves to single-layer GRUs.
#       Note 2: hidden_size here is equivalent to units in Keras - both specify number of features
#   - hidden state for the last time step, of shape (num_layers, batch_size, hidden_size)
#       Note 3: For a single-layer GRU, these values are already provided in the first list item.
#               (There is no cell state for a GRU.)
gru(input)