Apple's new M1 Max and M1 Pro MacBooks
But did they really? Let's dig a bit deeper. Only one year ago, Apple unveiled the M1 chip, its first custom System-on-a-Chip (SoC) for the Mac. At the time, you could configure at most 16GB of main memory, and in the MacBook Pro the chip had 8 CPU cores and 8 GPU cores. Compared to the most recent models, that sounds small, doesn't it?
The answer is: It depends. I regularly use an M1 MacBook Pro for my work, and the only real limitation I have found is the small RAM. If I were configuring this machine again and 32 or even 64 GB were an option, I would surely take it.
But even with all these impressive specs, let's not forget that the chip has to put its power to use. For us, that mainly means machine learning tasks such as training networks or exploring datasets. And this is where the trouble begins.
Yes, the M1 chip is fast and energy-efficient. But installing Python packages from pip is frequently a hassle. Installing TensorFlow, for instance, is not as easy as running `pip install tensorflow`.
When I attempted this, it took me three days to get the system up and running. Even though Apple provided a dedicated TensorFlow version, it was still challenging to get things going.
Thankfully, this improved when they announced support for TensorFlow's PluggableDevice mechanism, which aims to improve support for different accelerators (GPUs, TPUs, custom SoCs, and the like). With that release, things went more smoothly--at least as far as getting the framework to run.
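As a rough sanity check that the plugin route worked, you can query the installed package metadata. The helper below is only a sketch (the function name is mine), assuming Apple's published package name `tensorflow-metal`; the plugin registers through the PluggableDevice mechanism rather than as an importable module, so we look at distribution metadata instead of trying to import it.

```python
from importlib import metadata

def metal_plugin_installed() -> bool:
    """Check whether the `tensorflow-metal` plugin distribution is installed.

    The plugin hooks into TensorFlow via the PluggableDevice mechanism, so it
    has no importable module of its own; we query package metadata instead.
    """
    try:
        metadata.version("tensorflow-metal")
        return True
    except metadata.PackageNotFoundError:
        return False

print(metal_plugin_installed())
```

On a correctly set-up M1 machine this prints `True`, and `tf.config.list_physical_devices("GPU")` additionally shows the Metal device.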
After one year, I still regularly run into trouble when trying to install packages natively from pip. Yes, using Anaconda makes things easier. But once you are about to deploy your scripts, you likely want to freeze the requirements and use the same setup in the remote environment. Then you have to handle two requirements files: one from Anaconda and one from pip.
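One way to cope is to treat the conda export as the single source of truth and pull the pip section out of it for the remote machine. The sketch below is a minimal, stdlib-only illustration; the file contents, version pins, and the helper name are hypothetical, not from a real project.

```python
# Sketch: extract the pip dependencies from a conda `environment.yml` export
# so a single requirements.txt can be reused on a non-conda remote machine.
# The sample file below is illustrative only.

SAMPLE_ENV_YML = """\
name: ml-env
dependencies:
  - python=3.9
  - numpy=1.21.2
  - pip:
    - tensorflow-macos==2.6.0
    - tensorflow-metal==0.2.0
"""

def pip_requirements(env_yml: str) -> list:
    """Return the package specs listed under the 'pip:' key."""
    reqs = []
    in_pip = False
    pip_indent = 0
    for line in env_yml.splitlines():
        stripped = line.strip()
        indent = len(line) - len(line.lstrip())
        if stripped == "- pip:":
            in_pip = True
            pip_indent = indent
        elif in_pip and indent > pip_indent and stripped.startswith("- "):
            reqs.append(stripped[2:])  # drop the leading "- "
        else:
            in_pip = False
    return reqs

print(pip_requirements(SAMPLE_ENV_YML))
# ['tensorflow-macos==2.6.0', 'tensorflow-metal==0.2.0']
```

For anything beyond a sketch, a proper YAML parser (e.g. PyYAML) would be the safer choice; the point here is only that one export can feed both environments.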
So, what's the solution after all this lamenting? The first and easiest solution: Use a Linux machine with a time-tested AMD or Intel CPU and pair it with a dedicated GPU.
The second solution: Embrace the learning. Yes, I spent three days getting TensorFlow to work. And yes, I took long detours to install the packages I wanted. But as I was stuck with the M1 chip, I had no option but to work through it.
And I've learned a lot of things. I dug into compiling packages myself. I dug into using Anaconda. I dug into pip. I scrolled through GitHub for hours and learned about the importance of writing concise questions.
So, would I recommend the M1 chips? Yes, but you have to be prepared to learn. It's not as easy as plug-and-play. But it's not rocket science, either.