Keras String Input output representation in RNN Variational autoencoder

I'm having a look at the molecular autoencoder paper "Molecular autoencoder lets us interpolate and do gradient-based optimization of compounds" (https://arxiv.org/pdf/1610.02415.pdf). The paper takes an input SMILES string (a text representation of a molecule) and maps it with a variational encoder into a 2D latent space. An example SMILES string for hexan-3-ol is "CCCC(O)CC". In the paper they pad short strings to 120 characters with spaces. The paper encodes the string using a stack of 1D convolutional networks into a laten
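A minimal sketch of the padding and one-hot step the paper describes. The character set below is a toy one built from the single example string; the paper derives its charset from the whole dataset:

```python
import numpy as np

# Hypothetical character set; the paper builds its own from the dataset.
CHARSET = sorted(set("CCCC(O)CC")) + [' ']
MAX_LEN = 120  # padding length used in the paper

def smiles_to_onehot(smiles, charset=CHARSET, max_len=MAX_LEN):
    """Pad a SMILES string with spaces and one-hot encode it."""
    padded = smiles.ljust(max_len)  # pad short strings with spaces
    idx = {c: i for i, c in enumerate(charset)}
    onehot = np.zeros((max_len, len(charset)), dtype=np.float32)
    for t, ch in enumerate(padded):
        onehot[t, idx[ch]] = 1.0
    return onehot

x = smiles_to_onehot("CCCC(O)CC")
print(x.shape)  # (120, 5)
```

The resulting (120, charset_size) array is what the stack of 1D convolutions consumes, one row per character position.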

Keras ValueError: Error when checking model input: expected input_node to have 4 dimensions, but got array with shape (0, 1)

I'm using Keras and getting an error when running model.fit; it raises a ValueError, but I'm not entirely sure why checking the model input fails during fit. ValueError: Error when checking model input: expected input_node to have 4 dimensions, but got array with shape (0, 1) Here are more details from the stack trace: history = st_model.fit(X, y, batch_size=args.batch_size, nb_epoch=args.nb_epoch, verbose=2, validation_data=(X_cv, y_cv), initial_epoch=0) File "/usr/local/lib/python2.7/site-packag

Merge a forward lstm and a backward lstm in Keras

I would like to merge a forward LSTM and a backward LSTM in Keras. The input array of the backward LSTM is different from that of the forward LSTM, so I cannot use keras.layers.Bidirectional. The forward input is (10, 4). The backward input is (12, 4), and it is reversed before being fed into the model. I would like to reverse it again after the LSTM and merge it with the forward output. The simplified model is as follows. from lambdawithmask import Lambda as MaskLambda def reverse_func(x, mask=None): r
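The reverse-before-and-after idea can be sketched in plain NumPy, with a toy causal function standing in for the LSTM. Since the two branches have different lengths, merging on the final states is just one plausible choice (not necessarily what the asker intends):

```python
import numpy as np

def reverse_time(x):
    # Mirrors Lambda(lambda x: x[:, ::-1, :]) in Keras: flip the time axis.
    return x[:, ::-1, :]

def toy_rnn(x):
    # Stand-in for an LSTM: a causal cumulative sum over time.
    return np.cumsum(x, axis=1)

fwd_in = np.ones((2, 10, 4))   # forward branch input (batch, 10, 4)
bwd_in = np.ones((2, 12, 4))   # backward branch input (batch, 12, 4)

fwd_out = toy_rnn(fwd_in)
# Reverse before the "RNN", reverse again after, as in the question.
bwd_out = reverse_time(toy_rnn(reverse_time(bwd_in)))

# The sequences have different lengths, so merge the last forward state
# with the first (i.e. chronologically last-processed) backward state.
merged = np.concatenate([fwd_out[:, -1, :], bwd_out[:, 0, :]], axis=-1)
print(merged.shape)  # (2, 8)
```

In Keras the same shape logic applies: after the second reversal, both branch outputs are in original time order and can be merged with Concatenate.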

How to do multi GPU training with Keras?

I want my model to run on multiple GPUs sharing parameters but with different batches of data. Can I do something like that with model.fit()? Is there any other alternative?

Keras Type error when trying to fit a VGG like model

When I try to fit the following model: model = Sequential([ Lambda(vgg_preprocess, input_shape=(3,244,244)), Conv2D(64,3,3, activation='relu'), BatchNormalization(axis=1), Conv2D(64,3,3, activation='relu'), MaxPooling2D(), BatchNormalization(axis=1), Conv2D(128,3,3, activation='relu'), BatchNormalization(axis=1), Conv2D(128,3,3, activation='relu'), MaxPooling2D(), BatchNormalization(axis=1), Conv2D(256,3,3, activation='relu'), BatchNormalizati

Keras: triplet loss with positive and negative sample within batch

I'm trying to refactor my Keras code to use 'Batch Hard' sampling for the triplets, as proposed in https://arxiv.org/pdf/1703.07737.pdf: "the core idea is to form batches by randomly sampling P classes (person identities), and then randomly sampling K images of each class (person), thus resulting in a batch of PK images. Now, for each sample a in the batch, we can select the hardest positive and the hardest negative samples within the batch when forming the triplets for computing the l
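The batch-hard selection itself is straightforward to sketch in NumPy, outside any Keras loss, with toy embeddings (a loss implementation would do the same with backend ops):

```python
import numpy as np

def batch_hard_triplets(embeddings, labels):
    """For each anchor, pick the hardest positive (farthest same-class)
    and hardest negative (closest different-class) within the batch."""
    # Pairwise Euclidean distances, shape (B, B).
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    same = labels[:, None] == labels[None, :]
    hardest_pos = np.where(same, dist, -np.inf).max(axis=1)
    hardest_neg = np.where(~same, dist, np.inf).min(axis=1)
    return hardest_pos, hardest_neg

emb = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 3.0], [0.0, 4.0]])
lab = np.array([0, 0, 1, 1])
hp, hn = batch_hard_triplets(emb, lab)
print(hp, hn)
```

The batch-hard triplet loss of the paper is then `mean(softplus(hardest_pos - hardest_neg))` (or the hinge variant) over the anchors.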

How embeddings_initializer of Embeddings instance in keras is set

When we create an Embedding instance in Keras, the embeddings_initializer variable is set as initializers.get(embeddings_initializer) to set the initial random weights of the Keras layer. When I go to https://github.com/keras-team/keras/blob/45c838cc7a0a5830c0a54a2f58f48fc61950eb68/keras/initializers.py#L488 to see the definition of get(), there are 3 if/else cases; which of those cases is executed? The context of asking this question was: when are the initial random weights a

Keras How to get the output of intermediate layers which are not connected via Sequential() function?

I am new to Keras, but I have worked with pure TensorFlow before. I am trying to debug parts of the following network (I will just copy a fragment; the loss function, optimizer, etc. are unimportant for this code). #Block 1 (Conv,relu,batch) starts with 800 x 400 main_input = LNN.Input(shape=((800,400,5)),name='main_input') enc_conv1 = LNN.Convolution2D(8,3,padding='same',activation='relu')(main_input) enc_bn1 = LNN.BatchNormalization(axis=1)(enc_conv1) #Block 2 (Conv,relu,batch) starts with 40

Keras Embedding Layer

I am using the Keras newsgroup example code for text classification. I have saved the trained model using the h5py library. Will the embedding layer also be saved, or should I write some extra code when loading the model to use the embedding layer?

fitting a simple image generator in a keras model

I have a keras model that takes an input image and a label value. I have a data generator that reads the image, processes it and feeds it into the net: from PIL import Image def my_iterator(): i = 0 while True: img_name = train_df.loc[i,'Image'] img_label = train_df.loc[i,'Id'] img = Image.open('master_train/'+str(img_name)).convert('L') print(img.mode) longer_side = max(img.size) horizontal_padding = (longer_side - img.size[0]) / 2

Error about input dimension in keras reshape

I'm trying to reshape a tensor 'output' of dimension (1,512,512,9) into a tensor 'output_reshaped' of dimension (1,512*512, 9) in Keras in R. This should be the simplest thing ever, but for some reason it is not. This is the code I'm using: output_reshaped = layer_reshape(output ,target_shape = c( w*h, class) ) When I try to fit the model the following error pops up Error in py_call_impl(callable, dots$args, dots$keywords) : ValueError: Error when checking target: expected reshape_16 to

Keras How to solve this problem of Memory error?

So I have this error message that ruins all the fun with my work: Traceback (most recent call last): File "C:\Python\Python36\Scripts\Masterarbeit-1308\CNN - Kopie.py", line 97, in <module> model.fit(np.asarray(X_train), np.asarray(Y_train), batch_size=32, epochs=100, verbose=1, validation_data=(np.asarray(X_test), np.asarray(Y_test))) File "C:\Users\\****\AppData\Roaming\Python\Python36\site-packages\numpy\core\numeric.py", line 492, in asarray return array(a, dtype, copy=Fa

Core ML coremltools AttributeError: module 'keras.applications.mobilenet' has no attribute 'relu6'

I am trying to convert a .h5 Keras model into a .mlmodel model; my code is as follows: from keras.models import load_model import keras from keras.applications import MobileNet from keras.layers import DepthwiseConv2D from keras.utils.generic_utils import CustomObjectScope with CustomObjectScope({'relu6': keras.applications.mobilenet.relu6,'DepthwiseConv2D': keras.applications.mobilenet.DepthwiseConv2D}): model = load_model('CNN_tourist_11.h5', custom_objects={'relu6': MobileNet}) outp

Keras Check perplexity of a Language Model

I created a language model with a Keras LSTM and now I want to assess whether it's good, so I want to calculate perplexity. What is the best way to calculate the perplexity of a model in Python?
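One common definition: perplexity is the exponential of the average negative log-likelihood the model assigns to the actual next tokens. A NumPy sketch with hypothetical per-token probabilities (in practice these come from the model's softmax output at each true next word):

```python
import numpy as np

def perplexity(probs_of_targets):
    """Perplexity = exp of the average negative log-likelihood that the
    model assigns to each actual next word."""
    nll = -np.log(probs_of_targets)
    return float(np.exp(nll.mean()))

# Hypothetical per-token probabilities from a language model.
p = np.array([0.25, 0.25, 0.25, 0.25])
print(perplexity(p))  # 4.0: uniform over 4 tokens -> perplexity 4
```

Equivalently, if the model is trained with categorical cross-entropy in nats, perplexity is simply `exp(mean cross-entropy)` on held-out data.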

How many images are generated by keras fit_generator?

I use keras to do image augmentation and segmentation. I want to investigate the number of images generated, so I tested the following setting of arguments: (1) set batch_size to 1 in flow_from_directory when defining the generator: def myGene(...): ... image_datagen = ImageDataGenerator(**aug_dict) image_generator = image_datagen.flow_from_directory(...,batch_size = 1,..., save_prefix = 'view',...) mask_datagen = ImageDataGenerator(**aug_dict) mask_generator = mask_datagen.flo

Keras tokenizer has no attribute oov_token

Traceback (most recent call last): File "dac.py", line 87, in X_train=load_create_padded_data(X_train=X_train,savetokenizer=False,isPaddingDone=False,maxlen=sequence_length,tokenizer_path='./New_Tokenizer.tkn') File "/home/dpk/Downloads/DAC/New_Utils.py", line 92, in load_create_padded_data X_train=tokenizer.texts_to_sequences(X_train) File "/home/dpk/anaconda2/envs/venv/lib/python2.7/site-packages/keras_preprocessing/text.py", line 278, in texts_to_sequences return list(self.
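This error typically appears when a Tokenizer pickled under an older keras_preprocessing version (which had no oov_token attribute) is loaded under a newer one whose texts_to_sequences expects it. A hedged workaround is to patch the attribute after loading; the OldTokenizer class below is just a stand-in for the unpickled legacy object:

```python
# Sketch: a tokenizer pickled by an older Keras lacks the oov_token
# attribute that newer texts_to_sequences() expects. Patch it on load.
class OldTokenizer:              # stand-in for an unpickled legacy Tokenizer
    word_index = {'hello': 1, 'world': 2}

tokenizer = OldTokenizer()       # in real code: pickle.load(open(path, 'rb'))

if not hasattr(tokenizer, 'oov_token'):
    tokenizer.oov_token = None   # what newer versions default to

print(tokenizer.oov_token)  # None
```

The cleaner long-term fix is to re-fit and re-save the tokenizer with the same keras_preprocessing version used at load time.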

loss NAN when use keras training ANN classification

I have some data and want to do classification.
<class 'pandas.core.frame.DataFrame'>
Int64Index: 2474 entries, 0 to 5961
Data columns (total 4 columns):
Age                             2474 non-null int64
Pre_Hospitalization_Disposal    2474 non-null object
Injury_to_hospital_time         2474 non-null float64
Discharge_results               2474 non-null int64
dtypes: float64(1), int64(2), object(1)
memory usage: 96.6+ KB
Age, Pre_Hospitalization_Disposal, Injury_to_hospital_time is f

Keras Recurrent Neural Network: Different numpy reshape in these two articles?

The First: https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/ The above says that we have a sequence of like 96 samples, 1 time-step, and 1 feature. The Second: https://machinelearningmastery.com/reshape-input-data-long-short-term-memory-networks-keras/ The above says that we have a sequence of 1 sample, many time-steps, and 1 feature. What is the difference? E.g., if I measure temperature and pressure every day for 30 days then I assume it
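The two articles describe two different ways of slicing the same array into the (samples, timesteps, features) shape an LSTM expects. With hypothetical 30 days of temperature and pressure readings:

```python
import numpy as np

# 30 days of (temperature, pressure) readings: 30 rows, 2 features.
raw = np.random.rand(30, 2)

# Reading 1: treat each day as its own sample with 1 time-step.
a = raw.reshape(30, 1, 2)    # (samples=30, timesteps=1, features=2)

# Reading 2: treat the whole month as 1 sample with 30 time-steps.
b = raw.reshape(1, 30, 2)    # (samples=1, timesteps=30, features=2)

print(a.shape, b.shape)
```

The data is identical; what changes is how much history the LSTM sees per sample: one step (so it can exploit almost no temporal context) versus thirty.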

Using Dropout on Convolutional Layers in Keras

I have implemented a convolutional neural network with batch normalization on 1D input signal. My model has a pretty good accuracy of ~80%. Here is the order of my layers: (Conv1D, Batch, ReLU, MaxPooling) repeat 6 times, Conv1D, Batch, ReLU, Dense, Softmax. I have seen several articles saying that I should NOT use dropout on convolutional layers, but I should use batch normalization instead, so I want to experiment with my models by replacing all batch normalization layers with dropout layers

Fine tuning custom keras model

I have a keras model which is trained on 5 classes. The final layers of the model look like this: dr_steps = Dropout(0.25)(Dense(128, activation = 'relu')(gap_dr)) out_layer = Dense(5, activation = 'softmax')(dr_steps) model = Model(inputs = [in_lay], outputs = [out_layer]) What I want to do is fine-tune this model on an 8-class multilabel problem, but I am not sure how to achieve this. This is what I have tried: dr_steps = Dropout(0.25)(Dense(128, activation = 'relu')(gap_dr)) out_layer = Dense(t_y

Why is ImageDataGenerator model.fit_generator showing error Function call stack: keras_scratch_graph?

datagen.fit(X_train) history = model.fit_generator(datagen.flow(X_train,Y_train,batch_size=batch_size,), epochs=epochs,validation_data=(X_val,Y_val), verbose=2,steps_per_epoch=X_train.shape[0] // batch_size, callbacks= [learning_rate_reduction]) InternalError: Blas GEMM launch failed : a.shape=(86, 3136), b.shape=(3136, 256), m=86, n=256, k=3136 [[node dense_1/MatMul (defined at

Keras LSTM time series data classification model low accuracy

I'm trying to create a model for time series analysis using an LSTM layer; however, accuracy is very low even when using Dense layers and no LSTM. The data is a time series (synthetic spectrum) which depends on 4 parameters. Changing the parameters makes it possible to use different size datasets, where each sample is more or less different from the others. But no matter the size of the dataset, accuracy is always as low as 0.0 - 0.32 %. Model with LSTM: print(trainset.shape) print(testset.shape) print(trainlab

Attention with Encoder/Decoder Using Keras

I'm trying to apply this: https://github.com/wanasit/katakana/blob/master/notebooks/Attention-based%20Sequence-to-Sequence%20in%20Keras.ipynb to music generation instead of language translation. But there are more complications with music. Is there a way to identify where the error is coming from or are there any conceptual errors I'm making? def create_network(n_notes, n_durations, embed_size = 100, rnn_units = 256): """ create the structure of the neural network """ encoder_notes_

Keras LSTM Sequence Classification - Loss and validation loss decreases by trival amount and accuracy and validation accuracy remains static

Hoping for a quick second pair of eyes before I officially give up hope on applying deep learning to stock prediction. The goal is to use an LSTM to predict one of two classes. The positive class corresponds to a sequence that led to a price increase of 5% or greater over the next six periods - the negative class corresponds to a sequence that did not. As expected this has led to a bit of class imbalance with the ratio being about 6:1 negative to positive. The problem right now though is that t
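One hedged way to address the 6:1 class imbalance described above is a class_weight dict passed to model.fit, so that each class contributes roughly equally to the loss. The counts below are illustrative, matching the stated ratio:

```python
# With roughly 6:1 negative-to-positive imbalance, weight the rare
# class more heavily so each class contributes equally to the loss.
n_neg, n_pos = 6000, 1000       # illustrative counts at a 6:1 ratio
total = n_neg + n_pos

class_weight = {
    0: total / (2 * n_neg),     # negative class, down-weighted
    1: total / (2 * n_pos),     # positive class, up-weighted ~6x
}
print(class_weight)
```

The dict would then be passed as `model.fit(..., class_weight=class_weight)`. This won't fix a static-accuracy problem by itself, but it removes the trivial "always predict negative" optimum.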

Keras Efficient image to array training dataset in Python

I'm training a network that takes an image as an input and outputs a smaller 2D array (say 10x10) of scores for parts of the image. I currently build the training dataset by appending the image to a list (or tensor) of images and the output array to a list (or tensor) of arrays, sorted in the same order and use this as the input and output when training the model. It works, and so far the network I trained on a medium sized dataset predicts well for the relevant cases, but there must be a better

Confusion Matrix From a Keras Multiclass Model

I am building the following 10-class multiclass model with Keras. train_dir = '/Users/...' validation_dir = '/Users/...' test_dir = '/Users/...' train_image_generator = ImageDataGenerator(rescale=1./255) train_data_gen = train_image_generator.flow_from_directory(directory=train_dir, shuffle=False, target_size=IMAGE_SHAPE, class_mode='categorical') validation_image_generator = ImageDataGenerator(rescale=1./255) validation_data_gen = validation_image_generator.flow_from_directo
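With shuffle=False, a directory generator's `classes` attribute gives the true labels in order, and the argmax of model.predict gives the predictions; the matrix itself is then simple to build. A NumPy sketch (sklearn.metrics.confusion_matrix does the same thing):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """cm[i, j] counts samples of true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical labels; in the real code y_true would come from
# test_data_gen.classes (shuffle=False) and y_pred from
# model.predict(test_data_gen).argmax(axis=1).
y_true = [0, 0, 1, 2, 2]
y_pred = [0, 1, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred, 3)
print(cm)
```

For the 10-class model, `n_classes=10` and correct predictions sit on the diagonal.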

Keras How to load trained weights in branching model

I'm trying to add an extra branch to the original model to improve its capability. To be more specific, the original model looks like this: input1 = Input() x1 = branch1(x1) x1 = residualnet(x1) model = Model(inputs = [input1], outputs = x1) My improvements would be like this: input1 = Input() input2 = Input() x1 = branch1(x1) #1 x2 = branch2(x2) #2 x1 = concat([x1, x2]) x1 = residualnet(x1) #3 model = Model(inputs = [input1, input2], outputs = x1) In this condition, I already have train

Will custom Lambda layer be included in the backpropagating in Keras

So let's suppose I have the following Lambda layer: l = Lambda(lambda x: 1/(1+math.e**x)) This is a sigmoid function. Now, since I didn't specify the derivative of this function anywhere, I am curious whether it will be included in back-propagation or not. Is there some magical automatic mechanism which does that for me?
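Broadly, yes: Keras builds the Lambda body out of differentiable backend primitives and applies automatic differentiation, so no hand-written derivative is needed. (As an aside, `1/(1+e**x)` is sigmoid(-x), not sigmoid(x).) A finite-difference check against the analytic derivative illustrates what autodiff computes behind the scenes:

```python
import math

f = lambda x: 1 / (1 + math.e ** x)   # the Lambda body (this is sigmoid(-x))

def numeric_grad(f, x, h=1e-6):
    # Approximates what autodiff gives you for free: d f / d x,
    # here via central finite differences.
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
analytic = -f(x) * (1 - f(x))   # derivative of sigmoid(-x)
print(abs(numeric_grad(f, x) - analytic) < 1e-6)  # True
```

The caveat is that autodiff only works if the body uses differentiable ops on tensors; plain-Python control flow or non-differentiable ops would break the gradient.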

Keras Memory issue when retraining dense layers of VGG 16

I wanted to retrain the fully connected layers of VGG 16 for big gray-level images (1800x1800), using Keras with the Theano backend. So I have: created a new VGG with a single color channel and loaded the weights from the original VGG; set trainable=False on all the convolution layers (the pooling and padding are not trainable by definition); deleted the first two dense layers to keep only the output layer with two neurons; increased drastically the max pooling dimensions and strides because I work wi

Restricting the output values of layers in Keras

I have defined my MLP in the code below. I want to extract the values of layer_2. def gater(self): dim_inputs_data = Input(shape=(self.train_dim[1],)) dim_svm_yhat = Input(shape=(3,)) layer_1 = Dense(20, activation='sigmoid')(dim_inputs_data) layer_2 = Dense(3, name='layer_op_2', activation='sigmoid', use_bias=False)(layer_1) layer_3 = Dot(1)([layer_2, dim_svm_yhat]) out_layer = Dense(1, activation='tanh')(layer_3) model = Mode

Keras Avoiding vanishing gradient in deep neural networks

I'm taking a look at Keras to try to dive into deep learning. From what I know, stacking just a few dense layers effectively stops back propagation from working due to vanishing gradient problem. I found out that there is a pre-trained VGG-16 neural network you can download and build on top of it. This network has 16 layers so I guess, this is the territory where you hit the vanishing gradient problem. Suppose I wanted to train the network myself in Keras. How should I do it? Should I divide

Keras: feed output as input at next timestep

The goal is to predict a timeseries Y of 87601 timesteps (10 years) and 9 targets. The input features X (exogenous input) are 11 timeseries of 87600 timesteps. The output has one more timestep, as this is the initial value. The output Yt at timestep t depends on the input Xt and on the previous output Yt-1. Hence, the model should look like this: Model layout I could only find this thread on this: LSTM: How to feed the output back to the input? #4068. I tried to implemented this with Keras as

Google colaboratory, Keras : Save model in HDF5 file format and download it to Laptop

I am training a small RNN model in Google Colab using a GPU. I usually save my model and weights in HDF5 file format. On my local machine (laptop), I do it with the following procedure: sudo pip install h5py model.fit(....) model.save('model1.h5') I load the trained model back to make predictions using: from keras.models import load_model model = load_model('model1.h5') I now want to: save the model in Google Colab in the same format as above; download the .h5 file to my local machine (PC); make predictions on the PC and t

Keras validation_split in ImageData Generator returns 0 images

I'm trying to split my data (images) using ImageDataGenerator in Keras by setting validation_split to some fraction. Here is my code: #Generate batches of tensor image data with real-time data augmentation, looped over in batches train_DataGen_Augmnt =ImageDataGenerator( rescale=1./255, featurewise_center=True, validation_split=0.2, rotation_range=30, horizontal_flip=True, ) #validation data not augmented! Validation_DataGen = ImageDataGenerator(rescale=1./255) # Fl

Is Conv3D different from Convolution3D in Keras?

For C3D architectures and other C3D-related ones, I am finding different implementations for the 3D convolutional layers in Keras. Sometimes, people use Conv3D and sometimes Convolution3D. Are they different ? Is one better than the other ?

Keras Unknown regularizer: l2_cond When trying to load data from file

I have been getting an error when trying to load a model that I trained. model_path = r'I:\\ECGMODELCP\\0.467-0.840-010-0.408-0.860.reg.hdf5' model = keras.models.load_model(model_path) ValueError: Unknown regularizer: l2_cond I've tried model = keras.models.load_model(model_path, custom_objects={'l2_cond': l2_cond(weight_matrix)}) but get an error that weight_matrix is not defined. l2_cond is a custom kernel regularizer that I defined, and it depends on the weight matrix of the last layer of my

Keras Why the pytorch implementation is so inefficient?

I have implemented a paper about a CNN architecture in both Keras and PyTorch, but the Keras implementation is much more efficient: it takes 4 GB of GPU memory for training with 50000 samples and 10000 validation samples, while the PyTorch one takes all 12 GB of GPU memory and I can't even use a validation set! The optimizer for both of them is SGD with momentum, with the same settings for both. More info about the paper: [architecture]: https://github.com/Moeinh77/Lightweight-Deep-Convolutional-Network-for-Tiny-Object-Recognitio

Keras Computing the False Positive Rate (FPR) and True Positive Rate (TPR) in a CNN

I am designing a CNN for classifying two types of images, and I need to compute the FPR and TPR. In the following you can see my code, but I don't know how to compute the FPR and TPR based on this code. Could you please let me know how I can do that? I know that for computing the FPR and TPR I can use fpr, tpr, thresholds = metrics.roc_curve(y_test, y_predic) where y_predic can be computed by y_predic = model.predict(x_test), but in my code I don't know how to do that.
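At a single decision threshold, FPR and TPR come straight from the four confusion counts; sklearn's roc_curve just sweeps this over all thresholds. A NumPy sketch with hypothetical scores (in the real code, the scores would be model.predict(x_test) flattened):

```python
import numpy as np

def fpr_tpr(y_true, scores, threshold=0.5):
    """FPR = FP / (FP + TN), TPR = TP / (TP + FN) at one threshold."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(scores) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return fp / (fp + tn), tp / (tp + fn)

y_true = [0, 0, 1, 1]
scores = [0.1, 0.6, 0.4, 0.9]   # e.g. model.predict(x_test).ravel()
fpr, tpr = fpr_tpr(y_true, scores)
print(fpr, tpr)  # 0.5 0.5
```

With a sigmoid output and binary labels, `metrics.roc_curve(y_test, model.predict(x_test).ravel())` gives the full curve rather than one point.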

Keras Top k categorical accuracy for Time Distributed LSTM results

I'm trying to evaluate the results of an LSTM using top_k_categorical_accuracy. For each One-Hot encoded token, I try to predict the next token. In order to do this I take the output for each instance in the sequence by using the TimeDistributed layer wrapper, and pass it to a Dense layer to re-encode the results into the same One-Hot encoding. While using the built in accuracy metric metrics=['accuracy'] works without a hitch, using top_k_categorical_accuracy fails, giving me the error messag

Keras The embeddings using ** layers[0].get_weights()[0]**

I am using an example to study embedding networks, where I put the vocabulary size = 200 and the training sample contains about 20 different words. A vocab size of 200 means the number of words is 200, but effectively I'm working with only 20 words (the words of my training sample): say word[0] to word[19]. So, after the embedding, vector[0] corresponds to word[0] and so on. But vector[20] ... vector[30] ... what do they correspond to? I have no word[20] or word[30]. Thanks in advance.
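They correspond to nothing meaningful: rows 20..199 of the embedding matrix keep whatever embeddings_initializer drew for them, because a gradient only ever reaches rows whose indices occur in the training data. A NumPy sketch of that intuition (the +0.1 update is a stand-in for training):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 200, 8
emb = rng.normal(size=(vocab_size, dim))     # embeddings_initializer draw
init = emb.copy()

# Only indices that occur in the training data receive gradient updates.
seen = np.arange(20)                          # word[0] .. word[19]
emb[seen] += 0.1                              # stand-in for training updates

# Rows 20..199 never appear, so they stay at their random initial values.
print(np.allclose(emb[20:], init[20:]))  # True
```

So the extra rows returned by layers[0].get_weights()[0] are untrained random vectors; only wasted memory, not wrong results, unless such an index is fed in at inference time.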

The added layer must be an instance of class Layer. Found: <keras.layers.convolutional.Conv3D object at 0x000001D009782400>

TypeError Traceback (most recent call last) in 1 model= tf.keras.models.Sequential() ----> 2 model.add(Conv3D(64 ,kernel_size =(3,3,3) ,strides = (1,1,1), padding = 'same',input_shape=(input_shape) ,activation = 'relu')) 3 model.add(Conv3D(64 , kernel_size = (3,3,3) ,strides = (1,1,1) , padding = 'same' , activation = 'relu')) 4 model.add(MaxPooling3D(pooling_size = (2,2,2) , strides= (2,2,2))) 5 model.add(Conv3D(128 , kernel_size = (3,3,

Keras seq2seq model how to mask padding zeros from validation when training?

I am working on a project based on this great tutorial. https://machinelearningmastery.com/develop-encoder-decoder-model-sequence-sequence-prediction-keras/ I have had to pad the end of my input and output sequences with zeros to keep them the same length, e.g. [72 1 62 0 68 4 72 0 63 0 68 5 83 3 87 1 86 1 84 3 86 13 74 0 71 2 87 5 90 3 63 0 66 0 76 2 36 1 38 1 67 0 34 0 61 4 89 4 62 0 40 0 63 0 31 1 39 5 88 4 68 0 68 0 72 3 71 0 78 3 67 1 66 0 64 5 63
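The usual idea is to mask padded positions out of the loss or metric; in Keras this is typically a Masking layer or per-timestep sample_weight, and it assumes the pad value (here 0) is reserved for padding only. The validation-metric version of the idea in plain NumPy:

```python
import numpy as np

def masked_accuracy(y_true, y_pred, pad_value=0):
    """Accuracy that ignores positions where the target is padding."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    mask = y_true != pad_value           # True only at real tokens
    return (y_true[mask] == y_pred[mask]).mean()

y_true = [72, 1, 62, 0, 0, 0]            # trailing zeros are padding
y_pred = [72, 1, 50, 0, 0, 0]
print(masked_accuracy(y_true, y_pred))   # 2/3: padded zeros excluded
```

Without the mask, the easy-to-predict padding inflates accuracy; with it, only the real tokens count. During training, passing the same boolean mask as sample_weight (with sample_weight_mode='temporal') achieves the analogous effect on the loss.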

Keras Can't visualize the dropout layer in model summary using this method

Model: "model_9"
Layer (type)           Output Shape    Param #
inpt_mlp (InputLayer)  (None, 189)     0
hidden_1_mlp (Dense)   (None, 256)     48640
hidden_2_mlp (Dense)   (None, 128)     32896
out_mlp (Dense)        (None, 10554)   1361466
Total params: 1,443,002
Trainable params: 1,443,002
Non-trainable params: 0
As seen above, the dropout layer does not appear in the model summary.

Keras If I have fewer data points for training a stateful or stateless lstm, will smaller batch size or large epoch work

I have about 1000 data points (weekly sales for the past 5 years for 4 products) and some 10 independent features for each data point. I want to train an LSTM on this time series data to predict the next week's sales or the next 10 weeks' sales. Since the data available for training is low, can decreasing the batch size or increasing the number of epochs help with convergence and compensate (to some extent at least) for the smaller data size?
