This just means they’re a struggling company that needs to cut headcount and wants to do it without paying severance.
Considering this company was founded as a remote-work company, you’re absolutely right.
It’s such bullshit too because drastically changing someone’s working conditions is clearly a constructive dismissal and should lead to severance payments.
In addition, this tactic means the best employees leave first, because they’re the ones who can most easily get hired somewhere else.
Cue the pivot to some ridiculous buzz tech like AI in the near future, then being acquired and promptly abandoned by some big corp.
The thing with AI is that what the term most often refers to today is neural networks, which are really advanced statistics. And to get more precise statistics, you need exponentially more data, while the marginal utility of that extra precision decays exponentially. So exponentially increasing marginal expense meets exponentially decaying marginal utility.
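To make the diminishing-returns point concrete, here is a toy sketch. It assumes, purely for illustration, a power-law fit of test loss against dataset size with made-up constants a and alpha (empirical scaling-law studies report curves of roughly this shape, but these numbers are not fitted to anything):

```python
import numpy as np

# Assumed illustrative scaling curve: loss(N) = a * N**(-alpha).
# The constants a and alpha are made up for this sketch.
a, alpha = 10.0, 0.1

sizes = 10.0 ** np.arange(3, 9)  # 1e3 .. 1e8 training examples
loss = a * sizes ** -alpha

for n, prev, cur in zip(sizes[1:], loss[:-1], loss[1:]):
    # How much improvement the latest 10x increase in data bought
    print(f"N={n:>12,.0f}  loss={cur:.3f}  gain from last 10x: {prev - cur:.3f}")
```

Each tenfold increase in data buys a smaller absolute improvement than the last, while costing roughly ten times as much to collect and train on.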
Friend, your brain is also just a neural network. “Advanced statistics” are happening in your head every second. There is nothing exceptional about humans, save for the immense complexity of our neural network.
Just to be clear, I am in love with statistics and especially generative algos, and I wrote papers on them before ChatGPT was a thing.
I just hate that one company made a chatbot with it and now the whole world is cargo culting around it.
AI is a very broad term that also includes expert systems (such as approaches from Computational Fluid Dynamics, Finite Element Analysis, etc.), as well as traditional machine learning approaches (support vector machines, etc.). But yes, I agree: it’s most commonly associated with deep learning/neural network approaches.
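For contrast with deep learning, here is what one of those traditional approaches looks like in practice: a support vector machine on a small bundled dataset (the dataset choice and hyperparameters below are just illustrative defaults, not a tuned setup):

```python
# Minimal sketch of a "traditional ML" approach: an SVM classifier.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0)  # illustrative defaults, not tuned
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

No gradient descent, no backpropagation: the underlying optimization problem is entirely different from a neural network’s.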
That said, it’s misleading and inaccurate to state that neural networks are just statistics. In fact they are substantially more than advanced statistics. Certainly statistics is a component, but so too are probability, calculus, network/graph theory, and linear algebra, not to mention the computer science needed to program, tune, train, and run inference with them. Information theory (hello, entropy) sometimes plays a part as well.
The mathematical background it takes to really understand and practice the theory of both a forward pass and backpropagation is an entire undergraduate STEM curriculum’s worth. I usually advocate for new engineers in my org to learn it top down (by doing) and pull in the theory as needed, but that’s not how I did it, and I regularly see gaps in their decisions because of it.
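For readers who want to see how compact that core theory is once written down, here is a from-scratch sketch: one hidden layer trained on XOR, with the forward pass and the hand-derived backpropagation gradients spelled out. The layer sizes, learning rate, and iteration count are arbitrary choices for the demo:

```python
import numpy as np

# Tiny two-layer network learning XOR: forward pass + manual backprop.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(10_000):
    # Forward pass: affine map -> sigmoid, twice.
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Backward pass: chain rule applied to the squared-error loss.
    d_out = (out - y) * out * (1 - out)   # gradient at output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at hidden pre-activation

    # Plain gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())  # should approach [0, 1, 1, 0]
```

The chain-rule bookkeeping in those four gradient lines is exactly what autograd frameworks automate.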
And to get actually good at it? One does not simply become an AI systems engineer/technologist. It’s years of tinkering with computers and operating systems; sourcing, scraping, querying, and curating data; building data pipelines; cleaning data; engineering modeling approaches for various data types and desired outcomes against constraints (data, compute, economic, social/political); implementing POCs; finetuning models; mastering accelerated computing (GPUs, TPUs) and distributed computation; and more I’m sure I’m forgetting here. The number of adjacent fields I’ve had to dig deeply into to make any of this happen is stressful just to think about.
They’re fascinating machines, and they’ve been democratized/abstracted to the point where using one can be as simple as an import, a fit(), and a predict(). But being dismissive of the amazing mathematics and engineering under the hood that make them actually usable is disingenuous.
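As a concrete instance of that abstraction, here is the whole workflow via scikit-learn’s MLPClassifier (chosen because it really does expose one-line fit/predict; the dataset and hyperparameters are illustrative):

```python
# The "three calls and you're done" experience: import, fit, predict.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X, y)                     # training loop, backprop, optimizer: all hidden
print(model.predict(X[:5]), y[:5])  # inference is a single call
```

Everything the preceding paragraphs describe (pipelines, tuning, accelerated compute) is hidden behind those two method calls.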
I admit I have a bias here—I’ve spent the majority of my career building and deploying NN models.
What I meant when I said that they are advanced statistics is that that’s what they do. I know a lot of disciplines play a part in creating them, and I know it’s incredibly complicated; it took me quite a while to wrap my head around how the back-propagation algorithm works.
I also know that neural networks can do some really cool stuff. Recognizing tumors, for example. But it’s equally dangerous to overestimate them, so we have to be aware of their limitations.
Edit: All that being said, I do recognize that you have spent much more time learning about and working with neural networks than I have.