September Monthly Goal Epilogue: Neural Networks

And with that, I’ve completed the neural networks specialization. All in all, I feel like I learned a lot, and yet, very little.

What I did learn:
-The basic Keras toolkit and how to build neural networks with it (a small sketch follows this list)

-Different neural network architectures (convolutional neural networks, RNNs, sequence models, attention mechanisms)

-Most of the bugs you hit when wiring up a deep learning model come from mismatched matrix dimensions

-Bias and variance tradeoffs (high variance is usually addressed with more data, high bias with a bigger or deeper network; a rough sketch of that decision rule also follows this list)

-Day-to-day, machine learning is less about implementing algorithms and more about tuning data and hyperparameters, and choosing the right algorithm given the priorities of the project
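To make the first point concrete, here’s a minimal sketch of what building and training a small network with Keras looks like. The data, layer sizes, and hyperparameters are made-up placeholders for illustration, not anything from the course:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy data: 100 samples, 20 features, binary labels (placeholders only).
X = np.random.rand(100, 20).astype("float32")
y = np.random.randint(0, 2, size=(100,))

model = keras.Sequential([
    keras.Input(shape=(20,)),        # must match the feature dimension, or you hit
                                     # the shape-mismatch errors mentioned above
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=16, verbose=0)
print(model.evaluate(X, y, verbose=0))   # [loss, accuracy] on the toy data
```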
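And the bias/variance point above basically boils down to comparing training and validation error before deciding what to fix. The thresholds and function name here are my own illustrative assumptions, not course code:

```python
def diagnose(train_error: float, val_error: float, target: float = 0.05) -> str:
    """Hypothetical helper: decide what to try next based on error gaps."""
    if train_error > target:
        # Can't even fit the training set well -> high bias.
        return "high bias: try a bigger/deeper network or train longer"
    if val_error - train_error > target:
        # Fits training data but generalizes poorly -> high variance.
        return "high variance: try more data or regularization"
    return "errors look acceptable for this target"

print(diagnose(train_error=0.15, val_error=0.17))  # high bias
print(diagnose(train_error=0.02, val_error=0.12))  # high variance
```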

 

What I didn’t learn:

-A deeply granular understanding of the math behind neural networks. While I feel like I understand the intuition behind gradient descent, forward propagation, backpropagation, etc. (the basic update step is sketched after this list), there were still a lot of concepts that went over my head.

-A deeper sense of how deep learning will change the world. I believe it will make a significant impact across industries, but I’m less clear on the ‘how’. Knowing that computers can now beat world champions at Go is cool, but it doesn’t convey the scale of impact I want to understand.
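For reference, the gradient descent intuition I’m describing is just: compute the derivative of the loss with respect to a parameter, then nudge the parameter a small step in the opposite direction. A toy one-parameter example (my own made-up function, not from the course):

```python
# Minimize f(w) = (w - 3)^2 with plain gradient descent.
learning_rate = 0.1
w = 0.0
for step in range(50):
    grad = 2 * (w - 3)            # df/dw
    w -= learning_rate * grad     # step downhill along the gradient
print(round(w, 4))                # approaches 3.0, the minimum
```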

Overall, this was a fun foray, and I definitely enjoyed it. Whether I implement some deep learning stuff in the future is yet to be decided.
