Read An in-depth look at Core ML 3 and stumbled upon the fact that Core ML 3 brings a SoundAnalysisPreprocessing layer for Create ML's new sound classification model. It takes audio samples and converts them to mel spectrograms, so it can be used in a Pipeline as the input to an audio feature extraction model (typically a neural network). I have been working on exactly this for the last month. Unfortunately, its sizes are fixed and too limited for the task I am working on.
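
Such a Create ML sound classifier is typically driven on device through the SoundAnalysis framework. A minimal Swift sketch, assuming a compiled classifier at a hypothetical `modelURL` (the function and class names are mine):

```swift
import AVFoundation
import CoreML
import SoundAnalysis

// Prints the top classification whenever the analyzer produces a result.
class ResultsObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }
}

// Keep the observer alive for as long as the analyzer is running.
func makeAnalyzer(modelURL: URL, format: AVAudioFormat,
                  observer: SNResultsObserving) throws -> SNAudioStreamAnalyzer {
    let classifier = try MLModel(contentsOf: modelURL)
    let request = try SNClassifySoundRequest(mlModel: classifier)
    let analyzer = SNAudioStreamAnalyzer(format: format)
    try analyzer.add(request, withObserver: observer)
    // Feed it PCM buffers, e.g. from an AVAudioEngine input tap:
    // analyzer.analyze(buffer, atAudioFramePosition: when.sampleTime)
    return analyzer
}
```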

Jumped to the post Custom Layers in Core ML, which explains in detail how to build a custom layer implementation for the CPU (a plain Swift version with loops, plus a vectorized one using the Accelerate framework that runs about 150 times faster) and for the GPU (with a Metal shader).
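
Out of curiosity, here is a minimal sketch of what the vectorized CPU path of such a custom layer could look like, assuming an elementwise Swish activation, f(x) = x * sigmoid(x); the class name and details are my assumptions, not code from the post:

```swift
import Foundation
import Accelerate
import CoreML

// A custom layer computing Swish, f(x) = x * sigmoid(x), elementwise on the CPU.
@objc(SwishLayer) class SwishLayer: NSObject, MLCustomLayer {
    required init(parameters: [String : Any]) throws {
        super.init()
    }

    func setWeightData(_ weights: [Data]) throws {
        // Swish has no learned weights.
    }

    func outputShapes(forInputShapes inputShapes: [[NSNumber]]) throws -> [[NSNumber]] {
        return inputShapes  // elementwise, so shapes pass through unchanged
    }

    func evaluate(inputs: [MLMultiArray], outputs: [MLMultiArray]) throws {
        for (input, output) in zip(inputs, outputs) {
            var n = Int32(input.count)
            let x = input.dataPointer.bindMemory(to: Float.self, capacity: input.count)
            let y = output.dataPointer.bindMemory(to: Float.self, capacity: output.count)
            var one: Float = 1
            vDSP_vneg(x, 1, y, 1, vDSP_Length(n))        // y = -x
            vvexpf(y, y, &n)                             // y = exp(-x)
            vDSP_vsadd(y, 1, &one, y, 1, vDSP_Length(n)) // y = 1 + exp(-x)
            vvrecf(y, y, &n)                             // y = sigmoid(x)
            vDSP_vmul(x, 1, y, 1, y, 1, vDSP_Length(n))  // y = x * sigmoid(x)
        }
    }
}
```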

Browsed MobileNetV2 + SSDLite with Core ML. Interesting reading, with a lot of SSD vs. YOLO comparisons. I should repeat all these steps myself.

Browsed A peek inside Core ML. Whenever I need to examine a Core ML file, this is the post I will go to.
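
As a small complement to digging into the .mlmodel file itself, Core ML can report a compiled model's interface at runtime; a quick sketch (the function name is mine):

```swift
import CoreML

// Prints the inputs, outputs, and metadata of an already compiled model.
func printModelInterface(at compiledModelURL: URL) throws {
    let model = try MLModel(contentsOf: compiledModelURL)
    let description = model.modelDescription
    print("Inputs:", description.inputDescriptionsByFeatureName)
    print("Outputs:", description.outputDescriptionsByFeatureName)
    print("Metadata:", description.metadata)
}
```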

Tried to compile a Core ML model on device for the first time. Interesting that the console output matches Xcode's.
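
For reference, on-device compilation goes through MLModel.compileModel(at:); a minimal sketch, where `rawModelURL` is a hypothetical location of a .mlmodel fetched at runtime:

```swift
import CoreML

func loadModel(from rawModelURL: URL) throws -> MLModel {
    // Produces a compiled .mlmodelc directory in a temporary location;
    // in a real app you would move it somewhere permanent for reuse.
    let compiledURL = try MLModel.compileModel(at: rawModelURL)
    return try MLModel(contentsOf: compiledURL)
}
```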

Read Building a Neural Style Transfer app on iOS with PyTorch and CoreML. People don't shy away from wrapping ready-made style transfer networks into paid apps.

Re-watched the week 2 videos of the course Audio Signal Processing for Music Applications, and completed the quiz and assignment.