The Alpha Version of Kotlin DL Has Been Released

JetBrains has released an alpha version of KotlinDL, a deep learning framework for Kotlin. KotlinDL provides simple APIs for describing and training neural networks. The developers aim to lower the entry barrier to deep learning on the Java Virtual Machine through a high-level API and carefully chosen default values for many parameters.

What is Included in the API?

In this early version of KotlinDL, developers will find everything needed to describe multilayer perceptrons and convolutional networks. Most parameters come with reasonable defaults, while users still get a wide choice of optimizers, initializers, activation functions, and other settings. A model obtained from training can be saved and used in applications written in Kotlin or Java.
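As a sketch of what this looks like (the layer sizes here are illustrative, and the exact package paths are those of the published KotlinDL artifacts and may differ slightly between versions), a small multilayer perceptron for 28×28 grayscale images could be described as:

```kotlin
import org.jetbrains.kotlinx.dl.api.core.Sequential
import org.jetbrains.kotlinx.dl.api.core.layer.Dense
import org.jetbrains.kotlinx.dl.api.core.layer.Flatten
import org.jetbrains.kotlinx.dl.api.core.layer.Input
import org.jetbrains.kotlinx.dl.api.core.loss.Losses
import org.jetbrains.kotlinx.dl.api.core.metric.Metrics
import org.jetbrains.kotlinx.dl.api.core.optimizer.Adam

// A small multilayer perceptron: initializers, activations, and most
// other parameters are left at their library defaults.
val model = Sequential.of(
    Input(28, 28, 1),
    Flatten(),
    Dense(300),
    Dense(100),
    Dense(10)
)

fun main() {
    model.use {
        it.compile(
            optimizer = Adam(),
            loss = Losses.SOFT_MAX_CROSS_ENTROPY_WITH_LOGITS,
            metric = Metrics.ACCURACY
        )
        // it.fit(dataset = trainDataset, epochs = 10, batchSize = 100)
        // it.save(File("model/my_model"))
    }
}
```

The `fit` and `save` calls are commented out because they require a prepared dataset and a target directory; the point is how little boilerplate the model description itself needs.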

Loading Models Trained on Keras

Because the KotlinDL API closely mirrors Keras, you can load and use models trained with Keras in Python. When loading, you can apply the transfer learning technique: instead of training a neural network from scratch, you take a ready-made model and adapt it to your task.
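A minimal sketch of that workflow, assuming a model exported from Keras as a JSON configuration plus an HDF5 weights file (the file paths here are hypothetical, and the `loadWeights` import path reflects the published KotlinDL API, which may vary between versions):

```kotlin
import java.io.File
import io.jhdf.HdfFile
import org.jetbrains.kotlinx.dl.api.core.Sequential
import org.jetbrains.kotlinx.dl.api.core.loss.Losses
import org.jetbrains.kotlinx.dl.api.core.metric.Metrics
import org.jetbrains.kotlinx.dl.api.core.optimizer.Adam
import org.jetbrains.kotlinx.dl.api.inference.keras.loadWeights

fun main() {
    // Hypothetical files produced in Python with
    // model.to_json() and model.save_weights(...).
    val model = Sequential.loadModelConfiguration(File("modelConfig.json"))

    model.use {
        it.compile(
            optimizer = Adam(),
            loss = Losses.SOFT_MAX_CROSS_ENTROPY_WITH_LOGITS,
            metric = Metrics.ACCURACY
        )
        // Load the pre-trained Keras weights instead of training from scratch.
        it.loadWeights(HdfFile(File("weights.h5")))
        // From here you can predict directly, or freeze the early layers
        // and fine-tune the rest on your own data (transfer learning).
    }
}
```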

KotlinDL uses the TensorFlow Java API as its engine. All computations are performed by the TensorFlow machine learning library in native memory, and during training the data also stays in native format.

Temporary Limitations

In the alpha version of KotlinDL, only a limited set of layers is available: Input(), Flatten(), Dense(), Dropout(), Conv2D(), MaxPool2D(), and AvgPool2D(). This restriction also limits which Keras models can be loaded into the framework: the VGG-16 and VGG-19 architectures are already supported, but ResNet50 is not yet. A minor update planned for the coming months will expand the number of supported architectures. The second temporary limitation is the lack of support for Android devices.

GPU Support

Training models on the CPU can take a considerable amount of time, so a common practice is to run computations on the GPU. This requires NVIDIA's CUDA to be installed. To start training a model on the GPU, it is enough to add just one dependency.
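A sketch of what that looks like in a Gradle build script (the artifact coordinates and version numbers below match the KotlinDL alpha, which ran on TensorFlow 1.15, but should be checked against the current documentation):

```groovy
dependencies {
    // The KotlinDL API itself.
    implementation 'org.jetbrains.kotlinx:kotlin-deeplearning-api:0.1.0'

    // The single extra dependency for GPU training: the CUDA-enabled
    // TensorFlow JNI backend (requires NVIDIA CUDA to be installed).
    implementation 'org.tensorflow:libtensorflow_jni_gpu:1.15.0'
}
```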
