This post describes why and how to train and run neural networks in optimized cross-platform desktop apps using JavaScript and TensorFlow.js. A minimal code example is included at the end for research purposes.
- Why this approach?
  - Local Models
  - JavaScript
  - TensorFlow.js
- How to do it
  - Setting up the project
    - Create project directory
    - Initialize npm project
    - Install dependencies (Electron, TensorFlow.js, electron-builder, TypeScript)
  - Create main Electron file (main.ts)
    - Window creation function
    - Model loading function
    - IPC handler for predictions
  - Create preload script (preload.ts)
    - Exposing the API
  - Create HTML file for user interface (index.html)
  - Create renderer script (renderer.ts)
    - Event Listener
    - Getting the Input Value
    - Running the Prediction
    - Displaying the Result
  - Package the Application for Building
    - Update package.json
    - Create TypeScript configuration (tsconfig.json)
  - Build and run the application
    - Compile TypeScript files
    - Run the app
    - Build a distributable package
- Code Example
Why this approach?
Why have I decided to experiment with this oddball tech stack? Let’s unpack.
Local Models
Running local models protects data privacy, removes network latency, and enables offline use; all important considerations for real-time creative applications involving sensitive IP. These benefits come at the cost of model performance and higher system requirements than on the web.
JavaScript
The primary advantage of using JavaScript is that much of your code will be portable to the web. You’ll also have access to tools you’re probably familiar with, like React, Next.js, and Electron.js, for developing cross-platform experiences.
I also chose JavaScript for demos and public-facing code because of its widespread familiarity among product developers and full-stack developers, who make up a large percentage of my audience. If my code is readable, I expect Python developers to be able to follow along as well.
Finally, JavaScript was selected based on the observation that many JavaScript developers and AI startups are currently working on solving the same problem: how do we create a life-changing experience with AI?
TensorFlow.js
TensorFlow.js is both cross-platform and optimized for CPU and GPU execution, which makes it stand out among JavaScript ML libraries. Many models developed for TensorFlow in Python can be adapted to run in a JavaScript environment as well. While a model like Stable Diffusion may not easily work with this library, it comes with a variety of pre-trained models, tools, and resources that we can take advantage of.
In the end, I disregarded all of the above reasoning to experiment with building a desktop app using local models for fun and document the results to share with you.
How to do it
We’re going to create a single-file desktop application that runs a machine learning model locally using the following process:
- Setting up the project
  - Create project directory
  - Initialize npm project
  - Install dependencies (Electron, TensorFlow.js, electron-builder, TypeScript)
- Create main Electron file (main.ts)
  - Set up Electron app and window
  - Load and initialize TensorFlow.js model
  - Handle IPC for model predictions
- Create preload script (preload.ts)
  - Expose safe APIs to renderer process
- Create HTML file for user interface (index.html)
  - Basic structure for input and output
- Create renderer script (renderer.ts)
  - Handle user interactions
  - Communicate with main process for predictions
- Update package.json
  - Define scripts for running and building
  - Configure electron-builder
- Create TypeScript configuration (tsconfig.json)
  - Set up TypeScript compiler options
- Build and run the application
  - Compile TypeScript files
  - Run the app
  - Build a distributable package
Setting up the project
First, we need to set up a directory and install the necessary dependencies via npm.
Create project directory
mkdir tfjs-electron-app && cd tfjs-electron-app
Initialize npm project
npm init -y
Install dependencies (Electron, TensorFlow.js, electron-builder, TypeScript)
npm install @tensorflow/tfjs @tensorflow/tfjs-node
npm install --save-dev electron@latest electron-builder typescript ts-node
Create main Electron file (main.ts)
Let's walk through the main.ts file, which serves as the entry point for our Electron application and handles the TensorFlow.js model. I'll break it down section by section, focusing on the relevant bits.
After we import the necessary modules (from Electron, the Node.js path module, and TensorFlow.js) and declare variables for the main application window and the TensorFlow model, we need to write a window creation function.
Window creation function:
async function createWindow() {
mainWindow = new BrowserWindow({
width: 800,
height: 600,
webPreferences: {
nodeIntegration: false,
contextIsolation: true,
preload: path.join(__dirname, 'preload.js')
},
});
await mainWindow.loadFile('index.html');
mainWindow.on('closed', () => {
mainWindow = null;
});
}
This function creates the main application window, sets up security features, loads the HTML file, and handles window closure.
Model loading function:
async function loadModel() {
// Load your TensorFlow.js model here
model = tf.sequential();
model.add(tf.layers.dense({units: 1, inputShape: [1]}));
model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});
// Train the model on some data
const xs = tf.tensor2d([-1, 0, 1, 2, 3, 4], [6, 1]);
const ys = tf.tensor2d([-3, -1, 1, 3, 5, 7], [6, 1]);
await model.fit(xs, ys, {epochs: 250});
}
This function creates and trains a simple TensorFlow.js model. In a real application, you'd likely load a pre-trained model instead.
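The toy dataset above lies exactly on the line y = 2x - 1, so a well-trained model should converge toward that function. As a sanity check, here is a plain TypeScript sketch (no TensorFlow.js required) of the target function and the mean squared error the model minimizes:

```typescript
// The training data is generated by y = 2x - 1; a trained model
// should approximate this function.
const target = (x: number): number => 2 * x - 1;

const xs = [-1, 0, 1, 2, 3, 4];
const ys = [-3, -1, 1, 3, 5, 7];

// Mean squared error of the target line on the training data.
// It is exactly 0 because the data lies on the line.
const mse = xs.reduce((sum, x, i) => sum + Math.pow(target(x) - ys[i], 2), 0) / xs.length;

console.log(mse);        // 0
console.log(target(10)); // 19
```

After 250 epochs of SGD the model's loss is near zero, so model.predict on an input of 10 should return a value close to 19.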
IPC handler for predictions:
ipcMain.handle('runPrediction', async (event, input: number) => {
  const inputTensor = tf.tensor2d([input], [1, 1]);
  const outputTensor = model.predict(inputTensor) as tf.Tensor;
  const outputData = await outputTensor.data();
  // Dispose of the tensors so repeated predictions don't leak memory
  inputTensor.dispose();
  outputTensor.dispose();
  return outputData[0];
});
This sets up an IPC (Inter-Process Communication) handler that allows the renderer process to request predictions from the model in the main process.
This structure allows us to load and run the TensorFlow.js model in the main process for better performance, while keeping the user interface responsive in the renderer process. The IPC mechanism provides a safe way for these processes to communicate.
That’s all we need to house our local model inside the desktop app! We still have a few things left to add, starting with a preload script.
Create preload script (preload.ts)
The preload.ts file is a crucial part of our Electron application, as it provides a secure bridge between the main process and the renderer process. Let's break it down:
We import two key modules from Electron:
- contextBridge: allows us to securely expose APIs to the renderer process.
- ipcRenderer: lets us send messages from the renderer process to the main process.
Now, we need to expose our API.
Exposing the API:
contextBridge.exposeInMainWorld('electronAPI', {
runPrediction: (input: number) => ipcRenderer.invoke('runPrediction', input),
});
Let's break this down further:
- contextBridge.exposeInMainWorld: safely exposes our API to the renderer process. It takes two arguments: the name of the global object we're creating ('electronAPI' in this case) and an object containing the methods we want to expose.
- runPrediction: the method we're exposing, a function that takes a number as input.
- (input: number) => ipcRenderer.invoke('runPrediction', input): the implementation of runPrediction. It uses ipcRenderer.invoke to send a message to the main process.
- 'runPrediction': the channel name. It must match the channel name we used in main.ts with ipcMain.handle.
- input: the data we're sending to the main process.
The ipcRenderer.invoke method returns a Promise, which will resolve with the result from the main process.
This setup allows our renderer process (the web page) to call window.electronAPI.runPrediction(someNumber), which will securely communicate with the main process to run the prediction using our TensorFlow.js model.
The key benefits of this approach are:
- Security: By using contextBridge, we're not exposing the entire ipcRenderer to the renderer process, which could be a security risk.
- Type Safety: By defining our API in TypeScript, we get type checking for our IPC calls.
- Separation of Concerns: The renderer process doesn't need to know about the details of how the prediction is run; it just calls a method and gets a result.
This preload script creates a clean, secure, and typed interface between our main process (where the TensorFlow.js model runs) and our renderer process (where the user interface lives).
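To see the shape of this invoke/handle round trip outside of Electron, here is a minimal stand-in in plain TypeScript. The handle and invoke functions below are illustrative mocks of the pattern, not Electron's actual implementation:

```typescript
// Mock of the invoke/handle contract: a named channel maps to an
// async handler, and callers await the handler's result.
type Handler = (...args: unknown[]) => Promise<unknown>;
const handlers = new Map<string, Handler>();

// "Main process" side: register a handler for a channel.
function handle(channel: string, fn: Handler): void {
  handlers.set(channel, fn);
}

// "Renderer" side: invoke a channel and await its result.
async function invoke(channel: string, ...args: unknown[]): Promise<unknown> {
  const fn = handlers.get(channel);
  if (!fn) throw new Error(`No handler for channel "${channel}"`);
  return fn(...args);
}

// Register a fake prediction handler, then call it like the renderer would.
handle('runPrediction', async (input) => (input as number) * 2 - 1);
invoke('runPrediction', 10).then((result) => console.log(result)); // 19
```

The real ipcRenderer.invoke additionally serializes arguments across the process boundary; this mock only captures the Promise-based request/response contract.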
Create HTML file for user interface (index.html)
Let's break down the index.html file, which serves as the user interface for our Electron application:
<!DOCTYPE html>
<html>
<head>
<title>TensorFlow.js Electron App</title>
</head>
<body>
<h1>TensorFlow.js Prediction</h1>
<input type="number" id="input" placeholder="Enter a number">
<button id="predict">Predict</button>
<p id="result"></p>
<script src="renderer.js"></script>
</body>
</html>
Let's go through this HTML file section by section, focusing only on the relevant bits.
- Input Field: <input type="number" id="input" placeholder="Enter a number">
  - type="number": creates a numeric input field.
  - id="input": assigns an ID to the input, which we can use to access it in our JavaScript.
  - placeholder: provides a hint to the user about what to enter.
- Predict Button: <button id="predict">Predict</button>
  This creates a button labeled "Predict" with an ID of "predict".
- Result Paragraph: <p id="result"></p>
  This empty paragraph will be used to display the prediction result.
- Script Tag: <script src="renderer.js"></script>
  This includes our renderer script, which will handle the UI logic and communication with the main process.
This HTML structure provides a simple interface for our TensorFlow.js prediction app:
- Users can input a number.
- They can click a button to run the prediction.
- The result will be displayed on the page.
The actual functionality (handling button clicks, running predictions, updating the result) will be implemented in the renderer.js file, which we’ll crack into next!
Create renderer script (renderer.ts)
Let's break down the renderer.ts file, which handles the user interface logic and communicates with the main process:
document.getElementById('predict')?.addEventListener('click', async () => {
const input = (document.getElementById('input') as HTMLInputElement).value;
const result = await window.electronAPI.runPrediction(parseFloat(input));
(document.getElementById('result') as HTMLParagraphElement).innerText = `Prediction: ${result}`;
});
Let's go through this code step by step, focusing only on the relevant bits:
Event Listener:
document.getElementById('predict')?.addEventListener('click', async () => {
- document.getElementById('predict') gets the button element with id 'predict'.
- addEventListener('click', ...) attaches a click event listener to the button.
Getting the Input Value:
const input = (document.getElementById('input') as HTMLInputElement).value;
- as HTMLInputElement is a TypeScript type assertion, ensuring we can access the value property.
Running the Prediction:
const result = await window.electronAPI.runPrediction(parseFloat(input));
- window.electronAPI.runPrediction calls the method we exposed in the preload script.
- await is used because runPrediction returns a Promise (due to the IPC communication).
Displaying the Result:
(document.getElementById('result') as HTMLParagraphElement).innerText = `Prediction: ${result}`;
- We set the innerText of this element to display the prediction result.
Key points about this code:
- Error Handling: This basic version doesn't include error handling. In a production app, you'd want to add try/catch blocks to handle potential errors.
- DOM Manipulation: We're using basic DOM methods (getElementById, addEventListener) to interact with the HTML elements.
- IPC Communication: The window.electronAPI.runPrediction call is where we communicate with the main process. This API was exposed by our preload script.
This renderer script ties together the HTML interface with the TensorFlow.js model running in the main process. When the user clicks the "Predict" button, it takes the input, sends it to the main process for prediction, and then displays the result.
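As noted above, this basic version skips input validation and error handling. Here is a hedged sketch of helpers a production renderer might add; parseInput and formatResult are hypothetical names, not part of the tutorial's code:

```typescript
// Hypothetical helpers (not in the original tutorial) showing the
// validation and error handling a production renderer would add.
function parseInput(raw: string): number {
  const value = parseFloat(raw);
  if (Number.isNaN(value)) {
    throw new Error(`"${raw}" is not a number`);
  }
  return value;
}

function formatResult(prediction: number): string {
  return `Prediction: ${prediction.toFixed(4)}`;
}

// In the click handler, the IPC call would then be wrapped:
//
//   try {
//     const result = await window.electronAPI.runPrediction(parseInput(input));
//     resultEl.innerText = formatResult(result);
//   } catch (err) {
//     resultEl.innerText = `Error: ${(err as Error).message}`;
//   }

console.log(formatResult(parseInput('10'))); // Prediction: 10.0000
```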
Package the Application for Building
Next, we need to set up package.json and tsconfig.json so that we can build our code. For simplicity, feel free to copy mine from below.
Update package.json
{
"name": "tfjs-electron-app",
"version": "1.0.0",
"main": "dist/main.js",
"scripts": {
"start": "tsc && electron .",
"build": "tsc && electron-builder --dir",
"dist": "tsc && electron-builder"
},
"build": {
"appId": "com.example.tfjs-electron-app",
"files": [
"dist/**/*",
"node_modules/**/*",
"package.json",
"index.html"
],
"directories": {
"output": "build"
},
"asar": true
},
"dependencies": {
"@tensorflow/tfjs-node": "4.20.0",
"typescript": "5.5.2"
},
"devDependencies": {
"electron": "31.0.2",
"electron-builder": "24.13.3"
}
}
Create TypeScript configuration (tsconfig.json)
{
"compilerOptions": {
"target": "ES2015",
"module": "commonjs",
"strict": true,
"esModuleInterop": true,
"outDir": "./dist",
"rootDir": ".",
"skipLibCheck": true
},
"include": [
"*.ts"
]
}
Lastly, we need to declare a global type so that our Electron app recognizes the property we exposed in our preload script. Create a global.d.ts:
// global.d.ts
export {};
declare global {
interface Window {
electronAPI: {
runPrediction: (input: number) => Promise<number>;
}
}
}
Build and run the application
Now we’re ready to build and test our code!
Compile TypeScript files
Compile your code into dist/ using the following command:
npx tsc
Run the app
Now you’ll be able to run your desktop app using npm start and see the user interface. You should also see the following output from training inside your terminal:
npm start
> tfjs-electron-app@1.0.0 start
> tsc && electron .
Epoch 1 / 250
eta=0.0 ===============================================================>
17ms 2914us/step - loss=1.37
Epoch 2 / 250
eta=0.0 ===============================================================>
...
0ms 68us/step - loss=4.05e-3
Epoch 250 / 250
eta=0.0 ===============================================================>
0ms 57us/step - loss=3.97e-3
Build a distributable package
Bundle your code for distribution (including to App Stores) using the following command:
npm run dist
You’ll see output like the following in your terminal:
greynewell@grey tfjs-electron-app % npm run dist
> tfjs-electron-app@1.0.0 dist
> tsc && electron-builder
• electron-builder version=24.13.3 os=23.4.0
• loaded configuration file=package.json ("build" field)
• description is missed in the package.json appPackageFile=/Users/greynewell/Projects/tfjs-electron-app/package.json
• author is missed in the package.json appPackageFile=/Users/greynewell/Projects/tfjs-electron-app/package.json
• writing effective config file=build/builder-effective-config.yaml
• rebuilding native dependencies dependencies=@tensorflow/tfjs-node@4.20.0 platform=darwin arch=arm64
• packaging platform=darwin arch=arm64 electron=31.0.2 appOutDir=build/mac-arm64
• downloading url=https://github.com/electron/electron/releases/download/v31.0.2/electron-v31.0.2-darwin-arm64.zip size=96 MB parts=8
• downloaded url=https://github.com/electron/electron/releases/download/v31.0.2/electron-v31.0.2-darwin-arm64.zip duration=11.959s
• default Electron icon is used reason=application icon is not set
• signing file=build/mac-arm64/tfjs-electron-app.app platform=darwin type=distribution identity=BD814295442EC6F30AA0529ED1124DA406741973 provisioningProfile=none
• skipped macOS notarization reason=`notarize` options were unable to be generated
• building target=macOS zip arch=arm64 file=build/tfjs-electron-app-1.0.0-arm64-mac.zip
• building target=DMG arch=arm64 file=build/tfjs-electron-app-1.0.0-arm64.dmg
• Detected arm64 process, HFS+ is unavailable. Creating dmg with APFS - supports Mac OSX 10.12+
• building block map blockMapFile=build/tfjs-electron-app-1.0.0-arm64.dmg.blockmap
• building block map blockMapFile=build/tfjs-electron-app-1.0.0-arm64-mac.zip.blockmap
On macOS, this results in an installer .dmg file being created in build/.
Code Example
In case you don’t want to follow along with the above steps and just want a minimal example to run locally for research purposes, you can use the following repository. Don’t forget to install the needed dependencies before running npm commands.
If this content was helpful to you, consider giving the repo a star and follow me on GitHub!