Welcome to CS With James
In this post I will discuss how I built a practical and powerful deep learning box for $1,150.
Before I start telling my story, I have to admit that using Ubuntu and fully custom hardware is best for utilization. However, I don't use this rig only for deep learning; I also use it for personal purposes, such as checking e-mail and watching YouTube.
I got an Asus GTX 1080 Ti card for $700. This can be a little bit pricey; if you want to see a deep learning performance comparison, please check this post. According to it, the 1080 Ti is the best option if you can afford it. Otherwise, get a 1080, a 1070 Ti (not in the post), or a 1070. If that is still too expensive, you can get a 1060 with 6GB of RAM. It is affordable and has plenty of RAM for someone just getting started with deep learning.
The price of the 1080 Ti has bumped up a little bit recently, so for the GPU price YMMV.
For the computer, I bought a Mac Pro 2009 (4,1). It is an old machine, but I have used Macs for years, and I don't want to switch my main operating system to Ubuntu or Windows because I have lots of paid applications on macOS. This Mac Pro tower is the only Mac that lets me install an aftermarket graphics card. I paid $400 for this machine; fortunately the CPU and RAM had been upgraded by the previous owner, and it works flawlessly.
I needed some cables to power the 1080 Ti. Fortunately they were not expensive.
You can get both cables from aliexpress.com for very cheap, but shipping takes about three weeks. You can also get them from Amazon, but they are a little bit more expensive compared to AliExpress.
Putting the Hardware Together
It was pretty simple: put your new GPU into the first PCIe slot. The Mac Pro only supports PCIe 2.0, but I found that is not a big bottleneck. If you want to connect the monitor to your old GPU, simply move it to another slot and use your new GPU only for computing. Connect the power cables to the motherboard and to a SATA power connector that you are not using. If you are using all the SATA power connectors, you can try to power the 8-pin with a 6-pin; some people argue that is safe, but I prefer to power it with two SATA power connectors.
That is pretty much it for the hardware.
Doing deep learning on a Mac can be a little bit tricky. Nvidia released a web driver and a CUDA driver for macOS, but it does not seem to perform as well as on Windows. However, if you decide to use macOS as your deep learning platform, you have to deal with that.
Google your OS version together with "Nvidia web driver" so you can download the specific driver build designed for your version of macOS.
For example, I use 10.13.2, so I search "10.13.2 Nvidia web driver" to find the matching driver. The next step is installing CUDA. You should install the latest CUDA driver, but not the latest CUDA Toolkit: if you want to use Google's pre-built TensorFlow binary, you have to install CUDA Toolkit 8.0 (with its bundled driver) and then update the driver version under System Preferences -> CUDA. Install cuDNN v5.1 for CUDA 8.0 so it is compatible with TensorFlow v1.1.0, which is the latest pre-built TensorFlow binary with GPU support for macOS.
You can use your system like this, or you can build TensorFlow from source to use CUDA 9.0 and cuDNN v7.
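Once the driver, CUDA 8.0, and cuDNN v5.1 are in place, it's worth confirming that TensorFlow actually sees the GPU before you start training. A minimal sketch: device_lib is part of TensorFlow's Python client, and the import guard is just my addition so the script degrades gracefully on a machine without TensorFlow installed.

```python
def gpu_device_names(devices):
    """Pick out the GPU device names from a list of (name, type) pairs."""
    return [name for name, dev_type in devices if dev_type == "GPU"]

try:
    # Enumerate the devices TensorFlow can use on this machine.
    from tensorflow.python.client import device_lib
    devices = [(d.name, d.device_type) for d in device_lib.list_local_devices()]
    print("GPUs visible to TensorFlow:", gpu_device_names(devices) or "none")
except ImportError:
    print("TensorFlow is not installed on this machine.")
```

If the 1080 Ti shows up (typically as something like /gpu:0), the driver, CUDA, and cuDNN stack is wired together correctly.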
Allow Outside Access
This can be a little bit complicated depending on your situation. Most modern routers support DDNS, so you don't have to worry about your IP address changing dynamically. In my case I use an Asus AC1900, which has built-in DDNS support, so I didn't have any problems.
You have to give your computer a static internal IP. You can usually do this under your router's LAN options: select your deep learning machine and assign it a static IP address.
The next thing to do is port forwarding. This routes all connections on a specific port to the assigned computer. Forward port 22 (SSH) to your deep learning machine.
Open the SSH Server
On a Mac it is pretty simple: go to "System Preferences -> Sharing", enable "Remote Login", and add your account for access.
From a remote computer you can now access your deep learning machine to write code and run it on the GPU.
Example: ssh Jamesjeon@jamesjeon.asuscomm.com
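Before relying on the DDNS name from outside, you can quickly probe whether port 22 is actually reachable. A small sketch using only the standard library; the hostname is just a placeholder, so substitute your own DDNS name:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace "localhost" with your DDNS hostname
# (e.g. jamesjeon.asuscomm.com) to test from outside your LAN.
print(port_open("localhost", 22))
```

If this prints False from outside your network, double-check the static IP assignment and the port forwarding rule on the router.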
Your computer has to stay on for remote access, but I will use my machine for mining when I am not using it for deep learning, so that is fine for me.
I am planning to install Jupyter Notebook for better productivity; I will explain that in a separate post.
So far I am happy with my new setup, and I am seeing pretty fast training speeds on the 1080 Ti. It is true that there are some downsides: I am not happy with the TensorFlow version, and there is less information on the web compared to Ubuntu. However, I am also using this machine as a video editing rig and for general personal use, and I am very happy with how my setup and data integrate with the Apple ecosystem.
FYI, you can install up to three GPUs, but you have to add an extra AC-to-DC power brick to your system. However, the brick fits beautifully into the ODD bay, so there are no wires dangling inside or outside the case. It looks stock!!