From f53f2c2e4a9579411b047414b9b65f7611cc20d4 Mon Sep 17 00:00:00 2001 From: marc <78002988+novak-99@users.noreply.github.com> Date: Fri, 24 Sep 2021 16:45:10 -0700 Subject: [PATCH 01/10] Update README.md --- README.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/README.md b/README.md index 82e0958..f09084a 100644 --- a/README.md +++ b/README.md @@ -76,6 +76,7 @@ The result will be the model's predictions for the entire dataset. - Gaussian CDF - RELU - GELU + - Sign - Unit Step - Sinh - Cosh @@ -132,6 +133,7 @@ The result will be the model's predictions for the entire dataset. 1. Multinomial Naive Bayes 2. Bernoulli Naive Bayes 3. Gaussian Naive Bayes +8. ***Support Vector Classification*** 8. ***K-Means*** 9. ***k-Nearest Neighbors*** 10. ***Outlier Finder (Using z-scores)*** From 28322c86843c78128c810c795a50e413b67e1a27 Mon Sep 17 00:00:00 2001 From: marc <78002988+novak-99@users.noreply.github.com> Date: Fri, 24 Sep 2021 17:00:59 -0700 Subject: [PATCH 02/10] Update README.md --- README.md | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/README.md b/README.md index f09084a..977b913 100644 --- a/README.md +++ b/README.md @@ -151,3 +151,13 @@ The result will be the model's predictions for the entire dataset. 3. Recall 4. Accuracy 5. F1 score + + +## What's in the Works? ML++, like most frameworks, is dynamic, and constantly changing! This is especially important in the world of ML, as new algorithms and techniques are being developed day by day. Here are a couple of things currently being developed for ML++: - Convolutional Neural Networks - Kernels for SVMs - Support Vector Regression + +## Citations Various materials helped me along the way of creating ML++, and I would like to give credit to them here. 
[This](https://www.tutorialspoint.com/cplusplus-program-to-compute-determinant-of-a-matrix) article by TutorialsPoint was a big help when trying to implement the determinant of a matrix, and [this](https://www.geeksforgeeks.org/adjoint-inverse-matrix/) website by GeeksForGeeks was very helpful when trying to take the adjoint and inverse of a matrix. Lastly, I would like to thank [this](https://towardsdatascience.com/svm-implementation-from-scratch-python-2db2fc52e5c2) article from Towards Data Science which helped illustrate a practical definition of the Hinge Loss cost function and its gradient. From 84532086fcb36225900be9151cc506f67e410bd5 Mon Sep 17 00:00:00 2001 From: marc <78002988+novak-99@users.noreply.github.com> Date: Fri, 24 Sep 2021 17:01:52 -0700 Subject: [PATCH 03/10] Update README.md --- README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index 977b913..58c5072 100644 --- a/README.md +++ b/README.md @@ -155,9 +155,9 @@ The result will be the model's predictions for the entire dataset. ## What's in the Works? ML++, like most frameworks, is dynamic, and constantly changing! This is especially important in the world of ML, as new algorithms and techniques are being developed day by day. Here are a couple of things currently being developed for ML++: - - Convolutional Neural Networks - - Kernels for SVMs - - Support Vector Regression + 1. Convolutional Neural Networks + 2. Kernels for SVMs + 3. Support Vector Regression ## Citations Various materials helped me along the way of creating ML++, and I would like to give credit to them here. [This](https://www.tutorialspoint.com/cplusplus-program-to-compute-determinant-of-a-matrix) article by TutorialsPoint was a big help when trying to implement the determinant of a matrix, and [this](https://www.geeksforgeeks.org/adjoint-inverse-matrix/) website by GeeksForGeeks was very helpful when trying to take the adjoint and inverse of a matrix. 
Lastly, I would like to thank [this](https://towardsdatascience.com/svm-implementation-from-scratch-python-2db2fc52e5c2) article from Towards Data Science which helped illustrate a practical definition of the Hinge Loss cost function and its gradient. From cafec0fb2255b7f4429cb1c83922099757dc778a Mon Sep 17 00:00:00 2001 From: marc <78002988+novak-99@users.noreply.github.com> Date: Fri, 24 Sep 2021 17:02:38 -0700 Subject: [PATCH 04/10] Update README.md --- README.md | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 58c5072..53d57bb 100644 --- a/README.md +++ b/README.md @@ -155,9 +155,15 @@ The result will be the model's predictions for the entire dataset. ## What's in the Works? ML++, like most frameworks, is dynamic, and constantly changing! This is especially important in the world of ML, as new algorithms and techniques are being developed day by day. Here are a couple of things currently being developed for ML++: - 1. Convolutional Neural Networks - 2. Kernels for SVMs +
+ - Convolutional Neural Networks +
++ - Kernels for SVMs +
+3. Support Vector Regression +
## Citations Various materials helped me along the way of creating ML++, and I would like to give credit to them here. [This](https://www.tutorialspoint.com/cplusplus-program-to-compute-determinant-of-a-matrix) article by TutorialsPoint was a big help when trying to implement the determinant of a matrix, and [this](https://www.geeksforgeeks.org/adjoint-inverse-matrix/) website by GeeksForGeeks was very helpful when trying to take the adjoint and inverse of a matrix. Lastly, I would like to thank [this](https://towardsdatascience.com/svm-implementation-from-scratch-python-2db2fc52e5c2) article from Towards Data Science which helped illustrate a practical definition of the Hinge Loss cost function and its gradient. From aff00ce4a12b77a1a814f3c0f26bb62aff575616 Mon Sep 17 00:00:00 2001 From: marc <78002988+novak-99@users.noreply.github.com> Date: Fri, 24 Sep 2021 17:03:01 -0700 Subject: [PATCH 05/10] Update README.md --- README.md | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/README.md b/README.md index 53d57bb..37cc17f 100644 --- a/README.md +++ b/README.md @@ -156,13 +156,9 @@ The result will be the model's predictions for the entire dataset. ## What's in the Works? ML++, like most frameworks, is dynamic, and constantly changing! This is especially important in the world of ML, as new algorithms and techniques are being developed day by day. Here are a couple of things currently being developed for ML++:- - Convolutional Neural Networks -
-+ - Convolutional Neural Networks - Kernels for SVMs -
-- 3. Support Vector Regression + - Support Vector Regression
## Citations From 836850d92e8f619b778298142f31bbb643b207a7 Mon Sep 17 00:00:00 2001 From: marc <78002988+novak-99@users.noreply.github.com> Date: Fri, 24 Sep 2021 17:03:04 -0700 Subject: [PATCH 06/10] Create README.md From 370550ec2df3593471aa690d5b021e7432289bb2 Mon Sep 17 00:00:00 2001 From: marc <78002988+novak-99@users.noreply.github.com> Date: Fri, 24 Sep 2021 17:03:59 -0700 Subject: [PATCH 07/10] Update README.md --- README.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/README.md b/README.md index 37cc17f..c4ad0ef 100644 --- a/README.md +++ b/README.md @@ -157,7 +157,11 @@ The result will be the model's predictions for the entire dataset. ML++, like most frameworks, is dynamic, and constantly changing! This is especially important in the world of ML, as new algorithms and techniques are being developed day by day. Here are a couple of things currently being developed for ML++:- Convolutional Neural Networks +
+- Kernels for SVMs +
+- Support Vector Regression
From 86bf4eb4047289425e3a397c9f71ce71b87fbd35 Mon Sep 17 00:00:00 2001 From: marc <78002988+novak-99@users.noreply.github.com> Date: Fri, 24 Sep 2021 17:05:03 -0700 Subject: [PATCH 08/10] Update README.md --- README.md | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/README.md b/README.md index c4ad0ef..8547c16 100644 --- a/README.md +++ b/README.md @@ -155,9 +155,7 @@ The result will be the model's predictions for the entire dataset. ## What's in the Works? ML++, like most frameworks, is dynamic, and constantly changing! This is especially important in the world of ML, as new algorithms and techniques are being developed day by day. Here are a couple of things currently being developed for ML++: -- Convolutional Neural Networks -
- Kernels for SVMs
@@ -166,4 +164,4 @@ ML++, like most frameworks, is dynamic, and constantly changing! This is especia ## Citations -Various materials helped me along the way of creating ML++, and I would like to give credit to them here. [This](https://www.tutorialspoint.com/cplusplus-program-to-compute-determinant-of-a-matrix) article by TutorialsPoint was a big help when trying to implement the determinant of a matrix, and [this](https://www.geeksforgeeks.org/adjoint-inverse-matrix/) website by GeeksForGeeks was very helpful when trying to take the adjoint and inverse of a matrix. Lastly, I would like to thank [this](https://towardsdatascience.com/svm-implementation-from-scratch-python-2db2fc52e5c2) article from Towards Data Science which helped illustrate a practical definition of the Hinge Loss cost function and its gradient. +Various materials helped me along the way of creating ML++, and I would like to give credit to them here. [This](https://www.tutorialspoint.com/cplusplus-program-to-compute-determinant-of-a-matrix) article by TutorialsPoint was a big help when trying to implement the determinant of a matrix, and [this](https://www.geeksforgeeks.org/adjoint-inverse-matrix/) website by GeeksForGeeks was very helpful when trying to take the adjoint and inverse of a matrix. Lastly, I would like to thank [this](https://towardsdatascience.com/svm-implementation-from-scratch-python-2db2fc52e5c2) article from Towards Data Science which helped illustrate a practical definition of the Hinge Loss cost function and its gradient when optimizing with SGD. 
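Patch 08 above extends the citation to mention the Hinge Loss gradient when optimizing with SGD. As a rough sketch of what that looks like in practice (an illustration only, assuming a linear model w.x - b with labels in {-1, +1}; the function names here are invented and this is not ML++'s actual implementation):

```cpp
#include <cstddef>
#include <vector>

// Hinge loss for one sample under a linear model f(x) = w.x - b:
//   L(w, b) = max(0, 1 - y * (w.x - b)),  with label y in {-1, +1}.
double hingeLoss(const std::vector<double>& w, double b,
                 const std::vector<double>& x, double y) {
    double score = -b;
    for (std::size_t i = 0; i < w.size(); ++i) score += w[i] * x[i];
    double margin = 1.0 - y * score;
    return margin > 0.0 ? margin : 0.0;
}

// One SGD step on a single sample (regularization omitted for brevity).
// The subgradient of the hinge loss is 0 when y * (w.x - b) >= 1, and
// (-y * x) w.r.t. w and (+y) w.r.t. b otherwise.
void sgdStep(std::vector<double>& w, double& b,
             const std::vector<double>& x, double y, double lr) {
    double score = -b;
    for (std::size_t i = 0; i < w.size(); ++i) score += w[i] * x[i];
    if (y * score < 1.0) {  // inside the margin or misclassified
        for (std::size_t i = 0; i < w.size(); ++i) w[i] += lr * y * x[i];
        b -= lr * y;
    }
}
```

Note that the subgradient is zero for samples outside the margin, so only margin violations move the weights.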
From 70d9f0e5c5f80413d37583bcb89337a86d83b1c4 Mon Sep 17 00:00:00 2001 From: marc <78002988+novak-99@users.noreply.github.com> Date: Fri, 24 Sep 2021 17:05:33 -0700 Subject: [PATCH 09/10] Update README.md --- README.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/README.md b/README.md index 8547c16..1b21f34 100644 --- a/README.md +++ b/README.md @@ -155,7 +155,9 @@ The result will be the model's predictions for the entire dataset. ## What's in the Works? ML++, like most frameworks, is dynamic, and constantly changing! This is especially important in the world of ML, as new algorithms and techniques are being developed day by day. Here are a couple of things currently being developed for ML++: +- Convolutional Neural Networks +
- Kernels for SVMs
From 523bfe074ad531d0b88a702300cc55518bf88018 Mon Sep 17 00:00:00 2001 From: marc <78002988+novak-99@users.noreply.github.com> Date: Fri, 24 Sep 2021 17:06:43 -0700 Subject: [PATCH 10/10] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 1b21f34..582448b 100644 --- a/README.md +++ b/README.md @@ -166,4 +166,4 @@ ML++, like most frameworks, is dynamic, and constantly changing! This is especia ## Citations -Various materials helped me along the way of creating ML++, and I would like to give credit to them here. [This](https://www.tutorialspoint.com/cplusplus-program-to-compute-determinant-of-a-matrix) article by TutorialsPoint was a big help when trying to implement the determinant of a matrix, and [this](https://www.geeksforgeeks.org/adjoint-inverse-matrix/) website by GeeksForGeeks was very helpful when trying to take the adjoint and inverse of a matrix. Lastly, I would like to thank [this](https://towardsdatascience.com/svm-implementation-from-scratch-python-2db2fc52e5c2) article from Towards Data Science which helped illustrate a practical definition of the Hinge Loss cost function and its gradient when optimizing with SGD. +Various materials helped me along the way of creating ML++, and I would like to give credit to them here. [This](https://www.tutorialspoint.com/cplusplus-program-to-compute-determinant-of-a-matrix) article by TutorialsPoint was a big help when trying to implement the determinant of a matrix, and [this](https://www.geeksforgeeks.org/adjoint-inverse-matrix/) article by GeeksForGeeks was very helpful when trying to take the adjoint and inverse of a matrix. Lastly, I would like to thank [this](https://towardsdatascience.com/svm-implementation-from-scratch-python-2db2fc52e5c2) article by Towards Data Science which helped illustrate a practical definition of the Hinge Loss cost function and its gradient when optimizing with SGD.
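Since the citations above credit articles on the determinant, adjoint, and inverse of a matrix, here is a brief sketch of the cofactor (Laplace) expansion those articles describe. This is an illustration with invented names, not ML++'s actual implementation, and the recursion is O(n!), so it is only sensible for small matrices:

```cpp
#include <cstddef>
#include <vector>

// Determinant by Laplace (cofactor) expansion along the first row:
//   det(A) = sum_j (-1)^j * A[0][j] * det(minor(A, 0, j))
double determinant(const std::vector<std::vector<double>>& A) {
    const std::size_t n = A.size();
    if (n == 1) return A[0][0];
    double det = 0.0;
    for (std::size_t col = 0; col < n; ++col) {
        // Build the minor obtained by deleting row 0 and column `col`.
        std::vector<std::vector<double>> sub(n - 1, std::vector<double>(n - 1));
        for (std::size_t i = 1; i < n; ++i)
            for (std::size_t j = 0, k = 0; j < n; ++j)
                if (j != col) sub[i - 1][k++] = A[i][j];
        const double sign = (col % 2 == 0) ? 1.0 : -1.0;
        det += sign * A[0][col] * determinant(sub);
    }
    return det;
}
```

The adjoint and inverse covered by the GeeksForGeeks article reuse the same machinery: adj(A)[j][i] is the signed cofactor of A[i][j], and the inverse is adj(A) divided by det(A).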