Coding with PyTorch in Android Phone (Termux)

Chulayuth Asawaroengchai
2 min read · Dec 25, 2020


I previously wrote a blog post about how to build and install TensorFlow in the Android Termux environment here:

https://medium.com/analytics-vidhya/developing-tensorflow-on-android-phone-cfc4297b676e

This time, it is PyTorch's turn. Luckily, the PyTorch setup process is fairly straightforward compared to TensorFlow. There is no complex build system like JVM/Bazel. All you need is CMake, plus a small modification to some ARM assembly code, and you are good to go.

I. Prerequisites

Note that if you already followed my last post about installing TensorFlow inside Termux, you can skip this prerequisites section.

  1. Install Termux from Google PlayStore.
  2. Launch Termux.
  3. Make sure Termux is up to date: pkg upgrade
  4. Install wget and proot: pkg install wget proot
  5. Create a directory for installing Ubuntu: mkdir ubuntu && cd ubuntu
  6. Download the Ubuntu chroot installation script: wget https://raw.githubusercontent.com/Neo-Oli/termux-ubuntu/master/ubuntu.sh
  7. Run the installation script: bash ubuntu.sh

Now we can start Ubuntu by running the script: ./start-ubuntu.sh

Next, we will check for and install updates on the Ubuntu image.

apt-get update && apt-get upgrade

Install some prerequisites.

apt-get install git wget vim zip build-essential python python3 python3-dev python3-pip libhdf5-dev
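The PyTorch build also expects CMake and a couple of Python packages that the list above does not cover. A hedged sketch of what I would install before building (these are the usual build-time dependencies of PyTorch 1.5; adjust as needed):

apt-get install cmake
pip3 install pyyaml numpy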

II. Setting up PyTorch inside the Termux environment

  1. Clone PyTorch from GitHub.
git clone --recursive https://github.com/pytorch/pytorch

2. Check out the last known version that builds on the ARM architecture.

cd pytorch
git checkout v1.5.0
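After switching tags, it is worth re-syncing the submodules so that third-party dependencies such as QNNPACK sit at the revisions the tag expects (standard git commands, shown here as a precaution):

git submodule sync
git submodule update --init --recursive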

3. Modify a piece of ARM assembly code that the NEON assembler on Android does not accept.

  • The modification changes the lane arrangement of the MOV operands from four 32-bit lanes (.4s) to sixteen bytes (.16b), which mean exactly the same thing for a full-register move. The change is necessary because the ARM NEON assembler on Android does not accept the .4s arrangement for these MOV instructions.
File: aten/src/ATen/native/quantized/cpu/qnnpack/src/q8gemm/8x8-dq-aarch64-neon.S
Line: 662

Change from:

MOV V8.4s, V9.4s
MOV V10.4s, V11.4s
MOV V12.4s, V13.4s
MOV V14.4s, V15.4s
MOV V16.4s, V17.4s
MOV V18.4s, V19.4s
MOV V20.4s, V21.4s
MOV V22.4s, V23.4s

into:

MOV V8.16b, V9.16b
MOV V10.16b, V11.16b
MOV V12.16b, V13.16b
MOV V14.16b, V15.16b
MOV V16.16b, V17.16b
MOV V18.16b, V19.16b
MOV V20.16b, V21.16b
MOV V22.16b, V23.16b
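If you prefer to apply the edit from the command line, a sed one-liner along these lines should work. It is only a sketch: it rewrites every MOV that uses the .4s arrangement in that file and assumes the uppercase spelling shown above, so review the diff before building.

sed -i 's/\(MOV V[0-9]*\)\.4s, \(V[0-9]*\)\.4s/\1.16b, \2.16b/' aten/src/ATen/native/quantized/cpu/qnnpack/src/q8gemm/8x8-dq-aarch64-neon.S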

4. Run the build and install command.

python3 setup.py install
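The build can take a long time on a phone and may exhaust memory. One option, sketched here with PyTorch's standard build environment variables (the exact values are up to you), is to switch off components you will not use on-device and cap the number of parallel compile jobs:

USE_CUDA=0 USE_CUDNN=0 USE_MKLDNN=0 BUILD_TEST=0 MAX_JOBS=2 python3 setup.py install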

That is it. Now you can load PyTorch and start experimenting with neural network training directly on your phone!
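For a quick smoke test, here is a minimal sketch to run inside the Ubuntu chroot: it prints the installed version and takes one autograd step on random data.

python3 - <<'EOF'
import torch

# Confirm the build loads and basic tensor ops run on the phone's CPU.
print(torch.__version__)

x = torch.rand(8, 3)
w = torch.rand(3, 1, requires_grad=True)

# One tiny gradient step to confirm autograd works end to end.
loss = (x @ w).sum()
loss.backward()
print(w.grad)
EOF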

Happy coding anywhere.
