diff --git a/README.md b/README.md
index c4ad0ef..8547c16 100644
--- a/README.md
+++ b/README.md
@@ -155,9 +155,7 @@ The result will be the model's predictions for the entire dataset.
 ## What's in the Works?
 ML++, like most frameworks, is dynamic, and constantly changing! This is especially important in the world of ML, as new algorithms and techniques are being developed day by day. Here are a couple of things currently being developed for ML++:
-

 - Convolutional Neural Networks
-

 - Kernels for SVMs

@@ -166,4 +164,4 @@ ML++, like most frameworks, is dynamic, and constantly changing! This is especia

 ## Citations
-Various different materials helped me along the way of creating ML++, and I would like to give credit to them here. [This](https://www.tutorialspoint.com/cplusplus-program-to-compute-determinant-of-a-matrix) article by TutorialsPoint was a big help when trying to implement the determinant of a matrix, and [this](https://www.geeksforgeeks.org/adjoint-inverse-matrix/) website by GeeksForGeeks was very helpful when trying to take the adjoint and inverse of a matrix. Lastly, I would like to thank [this](https://towardsdatascience.com/svm-implementation-from-scratch-python-2db2fc52e5c2) article from Towards Data Science which helped illustrate a practical definition of the Hinge Loss activation function and its gradient.
+Various materials helped me along the way of creating ML++, and I would like to give credit to them here. [This](https://www.tutorialspoint.com/cplusplus-program-to-compute-determinant-of-a-matrix) article by TutorialsPoint was a big help when trying to implement the determinant of a matrix, and [this](https://www.geeksforgeeks.org/adjoint-inverse-matrix/) website by GeeksForGeeks was very helpful when trying to take the adjoint and inverse of a matrix. Lastly, I would like to thank [this](https://towardsdatascience.com/svm-implementation-from-scratch-python-2db2fc52e5c2) article from Towards Data Science, which helped illustrate a practical definition of the Hinge Loss function and its gradient when optimizing with SGD.
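
For context on that last credit: for a label y in {-1, +1} and a raw model output z = w·x + b, the hinge loss is max(0, 1 - yz), and its subgradient is what an SGD step uses. The sketch below is a minimal illustration of that formulation only; the function names and the plain `std::vector` interface are assumptions for the example, not ML++'s actual API.

```cpp
#include <vector>
#include <cstddef>

// Hinge loss L(y, z) = max(0, 1 - y*z) for labels y in {-1, +1}
// and raw model output z = w.x + b.
double hingeLoss(double y, double z) {
    double margin = 1.0 - y * z;
    return margin > 0.0 ? margin : 0.0;
}

// Subgradient dL/dz: -y when the margin is violated (1 - y*z > 0), 0 otherwise.
double hingeLossDeriv(double y, double z) {
    return (1.0 - y * z > 0.0) ? -y : 0.0;
}

// One SGD step on a single example x with label y and learning rate eta.
void sgdStep(std::vector<double>& w, double& b,
             const std::vector<double>& x, double y, double eta) {
    double z = b;
    for (std::size_t i = 0; i < x.size(); ++i) z += w[i] * x[i];

    double dz = hingeLossDeriv(y, z);         // 0 if classified with sufficient margin
    for (std::size_t i = 0; i < x.size(); ++i)
        w[i] -= eta * dz * x[i];              // chain rule: dL/dw_i = (dL/dz) * x_i
    b -= eta * dz;                            // dL/db = dL/dz
}
```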