DL4S: Accelerated tensor operations and dynamic neural networks based on reverse mode automatic differentiation for every device that can run Swift - from watchOS to Linux

DL4S provides a high-level API for many accelerated operations common in neural networks and deep learning. It also has automatic differentiation built in, which allows you to create and train neural networks without manually implementing backpropagation - and without needing a special Swift toolchain.

Features include implementations for many basic binary and unary operators, broadcasting, matrix operations, convolutional and recurrent neural networks, commonly used optimizers, second derivatives and much more. DL4S provides implementations for common network architectures, such as VGG, AlexNet, ResNet and Transformers.

While its primary purpose is deep learning and optimization, DL4S can also be used as a general library for vectorized mathematical operations, similar to numpy.
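As a quick sketch of this numpy-style usage, built from operations that appear elsewhere in this README (exact signatures may differ slightly):

```swift
import DL4S

// Plain vectorized math - no gradient tracking requested
let x = Tensor<Float, CPU>([1, 2, 3, 4])
let y = log(exp(x))   // element-wise exp and log, as used in the examples below
print(y)              // element-wise identity, up to floating point error
print(x.reduceSum())  // sum over all elements
```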

Read the full documentation

Overview

  1. Installation
  2. Features
    1. Layers
    2. Optimizers
    3. Losses
    4. Tensor Operations
    5. Engines
    6. Architectures
  3. Examples

Installation

iOS / tvOS / macOS

  1. In Xcode, select "File" > "Swift Packages" > "Add Package Dependency"
  2. Enter https://github.com/palle-k/DL4S.git into the Package URL field and click "Next".
  3. Select "Branch", "master" and click "Next".
  4. In the "Add to Target" column, enable the package product DL4S for your app, then click "Next".

Note: Installation via CocoaPods is no longer supported for newer versions.

Swift Package

Add the dependency to your Package.swift file:

.package(url: "https://github.com/palle-k/DL4S.git", .branch("master"))

Then add DL4S as a dependency to your target:

.target(name: "MyPackage", dependencies: ["DL4S"])
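Putting the two snippets together, a minimal Package.swift might look like this (MyPackage is the placeholder name from the snippet above; pick the tools version your project actually uses):

```swift
// swift-tools-version:5.1
import PackageDescription

let package = Package(
    name: "MyPackage",
    dependencies: [
        .package(url: "https://github.com/palle-k/DL4S.git", .branch("master")),
    ],
    targets: [
        .target(name: "MyPackage", dependencies: ["DL4S"]),
    ]
)
```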

MKL / IPP / OpenMP Support

DL4S can be accelerated with Intel's Math Kernel Library, Integrated Performance Primitives and OpenMP (Installation Instructions).

On Apple devices, DL4S uses vectorized functions provided by the builtin Accelerate framework by default. If no acceleration library is available, a fallback implementation is used.

Compiling with MKL/IPP:

# After adding the APT repository as described in the installation instructions
sudo apt-get install intel-mkl-64bit-2019.5-075 intel-ipp-64bit-2019.5-075 libiomp-dev

export MKLROOT=/opt/intel/mkl
export IPPROOT=/opt/intel/ipp
export LD_LIBRARY_PATH=${MKLROOT}/lib/intel64:${IPPROOT}/lib/intel64:${LD_LIBRARY_PATH}

swift build -c release \
    -Xswiftc -DMKL_ENABLE \
    -Xlinker -L${MKLROOT}/lib/intel64 \
    -Xlinker -L${IPPROOT}/lib/intel64

TensorBoard Support

DL4S-Tensorboard provides a summary writer that can write TensorBoard-compatible logs.

LLDB Extension

DL4S includes an LLDB Python script that provides custom descriptions for tensors (util/debugger_support/tensor.py).

To use enhanced summaries, execute command script import /path/to/DL4S/util/debugger_support/tensor.py either directly in LLDB or add the command to your ~/.lldbinit file.

Then you can use the print or frame variable commands to print human-readable descriptions of tensors.
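A session using the script might look like this (myTensor stands in for whatever tensor variable is in scope; the output format is illustrative only):

```
(lldb) command script import /path/to/DL4S/util/debugger_support/tensor.py
(lldb) frame variable myTensor
```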

Features

Layers

Core:

  • Convolution
  • Transposed Convolution
  • Dense/Linear/Fully Connected
  • LSTM
  • Gated Recurrent Unit (GRU)
  • Vanilla RNN
  • Embedding
  • Multi-head Attention
  • Transformer Block

Pooling:

  • Max Pooling
  • Average Pooling
  • Adaptive Max Pooling
  • Adaptive Average Pooling

Norm:

  • Batch Norm
  • Layer Norm

Utility:

  • Bidirectional RNNs
  • Sequential
  • Lambda
  • Dropout

Activation:

  • Relu
  • LeakyRelu
  • Gelu
  • Tanh
  • Sigmoid
  • Softmax
  • Log Softmax
  • Swish
  • Mish
  • LiSHT

Transformer:

  • Positional Encoding
  • Scaled Dot Product Attention
  • Multihead Attention
  • Pointwise Feed Forward
  • Transformer Encoder Block
  • Transformer Decoder Block

Optimizers

  • SGD
  • Momentum
  • Adam
  • AMSGrad
  • AdaGrad
  • AdaDelta
  • RMSProp

Losses

  • Binary Cross-Entropy
  • Categorical Cross-Entropy
  • Negative Log Likelihood (NLL Loss)
  • MSE
  • L1 & L2 regularization

Tensor Operations

The behavior of broadcast operations is consistent with numpy rules.

  • broadcast-add
  • broadcast-sub
  • broadcast-mul
  • broadcast-div
  • matmul
  • neg
  • exp
  • pow
  • log
  • sqrt
  • sin
  • cos
  • tan
  • tanh
  • sum
  • max
  • relu
  • leaky relu
  • gelu
  • elu
  • elementwise min
  • elementwise max
  • reduce sum
  • reduce max
  • scatter
  • gather
  • conv2d
  • transposed conv2d
  • max pool
  • avg pool
  • subscript
  • subscript range
  • transpose
  • axis permute
  • reverse
  • im2col
  • col2im
  • stack / concat
  • swish activation
  • mish activation
  • lisht activation
  • diagonal matrix generation
  • diagonal extraction
  • band matrix generation

Engines

  • CPU (Accelerate framework for Apple devices)
  • CPU (Intel Math Kernel Library and Integrated Performance Primitives)
  • CPU (Generic)
  • GPU (ArrayFire: OpenCL, CUDA)

For an experimental, early stage GPU accelerated version, check out feature/arrayfire.
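As an illustration of the numpy-style broadcasting mentioned above (a sketch; the exact print format of DL4S tensors may differ):

```swift
import DL4S

let a = Tensor<Float, CPU>([[1], [2], [3]])  // shape [3, 1]
let b = Tensor<Float, CPU>([10, 20])         // shape [2]
let c = a + b                                // shapes broadcast to [3, 2], as with numpy
print(c)
```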

Architectures

Default implementations are provided for the following architectures:

  • ResNet18
  • VGG (11, 13, 16, 19)
  • AlexNet
  • Transformer

Examples

Some high-level examples have been implemented in separate repositories.

Arithmetic & Differentiation

DL4S provides a high-level interface to many vectorized operations on tensors.

let a = Tensor<Float, CPU>([[1,2],[3,4],[5,6]], requiresGradient: true)
let prod = a.transposed().matrixMultiplied(with: a)
let s = prod.reduceSum()
let l = log(s)
print(l) // 5.1873856

When a tensor is marked as requiring a gradient, a compute graph is captured. The graph stores all operations that use that tensor, directly or indirectly, as an operand.

It is then possible to backpropagate through that graph using the gradients(of:) function:

// Backpropagate
let dl_da = l.gradients(of: [a])[0]

print(dl_da)
/*
[[0.034, 0.034]
 [0.078, 0.078]
 [0.123, 0.123]]
*/

Second derivatives

The operations used during backpropagation are themselves differentiable. Therefore, second derivatives can be computed by computing the gradient of the gradient.

When higher order derivatives are required, the compute graph of the backwards pass has to be explicitly retained.

let t = Tensor<Float, CPU>([1,2,3,4], requiresGradient: true)

let result = t * t * t
print(result) // [1, 8, 27, 64]

let grad = result.gradients(of: [t], retainBackwardsGraph: true)[0]
print(grad) // [3, 12, 27, 48]

let secondGrad = grad.gradients(of: [t], retainBackwardsGraph: true)[0]
print(secondGrad) // [6, 12, 18, 24]

let thirdGrad = secondGrad.gradients(of: [t])[0]
print(thirdGrad) // [6, 6, 6, 6]

Convolutional Networks

Example for MNIST classification

// Input must be batchSizex1x28x28
var model = Sequential {
   Convolution2D<Float, CPU>(inputChannels: 1, outputChannels: 6, kernelSize: (5, 5))
   Relu<Float, CPU>()
   MaxPool2D<Float, CPU>(windowSize: 2, stride: 2)
   
   Convolution2D<Float, CPU>(inputChannels: 6, outputChannels: 16, kernelSize: (5, 5))
   Relu<Float, CPU>()
   MaxPool2D<Float, CPU>(windowSize: 2, stride: 2)
   
   Flatten<Float, CPU>()
   
   Dense<Float, CPU>(inputSize: 256, outputSize: 120)
   Relu<Float, CPU>()
   
   Dense<Float, CPU>(inputSize: 120, outputSize: 10)
   LogSoftmax<Float, CPU>()
}

var optimizer = Adam(model: model, learningRate: 0.001)

// Single iteration of minibatch gradient descent
let batch: Tensor<Float, CPU> = ... // shape: [batchSize, 1, 28, 28]
let y_true: Tensor<Int32, CPU> = ... // shape: [batchSize]

// use optimizer.model, not model
let pred = optimizer.model(batch)
let loss = categoricalNegativeLogLikelihood(expected: y_true, actual: pred)

let gradients = loss.gradients(of: optimizer.model.parameters)
optimizer.update(along: gradients)
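The single step above extends naturally to a training loop. Here nextBatch() is a hypothetical data-loading helper, not part of DL4S:

```swift
for step in 1 ... 1000 {
    // nextBatch() is assumed to yield a [batchSize, 1, 28, 28] image tensor
    // and a [batchSize] label tensor, e.g. from an MNIST loader.
    let (batch, y_true) = nextBatch()
    let pred = optimizer.model(batch)
    let loss = categoricalNegativeLogLikelihood(expected: y_true, actual: pred)
    let gradients = loss.gradients(of: optimizer.model.parameters)
    optimizer.update(along: gradients)
    if step % 100 == 0 {
        print("step \(step): loss \(loss)")
    }
}
```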

Recurrent Networks

Example for MNIST classification

The Gated Recurrent Unit scans the image from top to bottom and uses the final hidden state for classification.

let model = Sequential {
    GRU<Float, CPU>(inputSize: 28, hiddenSize: 128, direction: .forward)
    Lambda<GRU<Float, CPU>.Outputs, Tensor<Float, CPU>, Float, CPU> { inputs in
        inputs.0
    }
    Dense<Float, CPU>(inputSize: 128, outputSize: 10)
    LogSoftmax<Float, CPU>()
}

var optimizer = Adam(model: model, learningRate: 0.001)

let batch: Tensor<Float, CPU> = ... // shape: [batchSize, 28, 28]
let y_true: Tensor<Int32, CPU> = ... // shape: [batchSize]

let x = batch.permuted(to: 1, 0, 2) // Swap first and second axis
let pred = optimizer.model(x)
let loss = categoricalNegativeLogLikelihood(expected: y_true, actual: pred)

let gradients = loss.gradients(of: optimizer.model.parameters)
optimizer.update(along: gradients)
