I was first exposed to deep learning over a decade ago, while helping a professor put together a deep learning tutorial. It was interesting, but I gave up on understanding what was going on pretty early. For starters, there was a lot of math, and a series of unfortunate (academic) events meant that my knowledge of linear algebra and vector calculus was nearly nonexistent.
Not to mention, everything was in MATLAB! Ugh. To those of you who have mastered that language, more power to you. I’ll probably never be able to get past the one-indexing myself.
And so, I dropped all my nascent efforts to really learn deep learning beyond a few high-level basics, and ignored pretty much everything in that field for the next decade! It's been a little awkward, because that involvement in a deep learning tutorial seems to make a lot of people assume I'm an expert, whoops. But I'm done ignoring the field. It's not DALL-E or ChatGPT that's reignited my interest. It's Andrej Karpathy's ridiculously awesome 2+ hour video about backpropagation, which contains absolutely zero vector calculus and nothing more complicated than basic derivative math, that's done the trick. I friggin' binge-watched that thing like it was a kdrama, haha. People have told me for years that the concepts behind deep learning are "so easy" once you wade past the math, etc. etc., but this is definitely one of those instances where you have to see it to believe it!
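To give a taste of what "nothing more complicated than basic derivative math" means, here's a tiny sketch of backpropagation by hand (my own toy example, not taken from the video): run a forward pass through a small expression, then walk backward applying the chain rule one multiplication at a time.

```python
# Forward pass: y = (a * b + c) ** 2, computed one step at a time.
a, b, c = 2.0, -3.0, 5.0
d = a * b   # d = -6.0
e = d + c   # e = -1.0
y = e ** 2  # y = 1.0

# Backward pass: chain rule, one local derivative per step.
dy_de = 2 * e        # d(e**2)/de = 2e        -> -2.0
dy_dd = dy_de * 1.0  # e = d + c, de/dd = 1   -> -2.0
dy_dc = dy_de * 1.0  # de/dc = 1              -> -2.0
dy_da = dy_dd * b    # d = a * b, dd/da = b   ->  6.0
dy_db = dy_dd * a    # dd/db = a              -> -4.0

print(dy_da, dy_db, dy_dc)  # gradients of y w.r.t. each input
```

That's the whole trick: every node only needs to know its own local derivative, and multiplying those together as you walk backward gives you the gradient of the final output with respect to every input.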
Probably nobody is reading this, but if you are, thanks for visiting, and feel free to follow along on my weird journey to properly learn deep learning, with the initial goal of finally being able to add "PyTorch" to my resume!