February 1, 2022, by Yurii Vlasiuk

Best practices for building CLI and publishing it to NPM


Introduction

CLI stands for command-line interface. It’s a common way to interact with different processes. CLI tools are widely used in software development, DevOps, and even testing. Popular examples you may have already used are Docker, Docker Compose, htop, Webpack, and PM2. All of them provide standardized parameters, arguments, and usage descriptions (a help section). In everyday work, a CLI gives you a better developer experience, and you can even build one yourself to automate routine tasks.

As a developer, I often use simple scripts to automate routine tasks, and it’s quite important to be able to create such utilities with a clear API, informative output, and an easy-to-read, easy-to-update codebase. In this article, I want to share our best practices for building CLI tools. As a result, you will get recommendations on how to improve your NodeJS scripts and publish them to NPM as a module. Let’s go.

Dependencies and approaches

1.1 Command parameters description and parsing

Usually, for CLI tools written in NodeJS we use docopt, which provides a standard interface for utilities with parameters. First of all, docopt itself is a standard language for describing the parameters of command-line tools. This approach was designed so that terminal utilities implemented in different programming languages could describe the same parameters identically. It means that if you have a tool with an interface in one language, it can easily be replaced by another implementation providing exactly the same interface.

The docopt module parses parameters according to the docopt language, based on the documentation you have written. Such scripts are always easy to use because they ship with documentation out of the box. It looks like this:

Usage:
  quick_example.js tcp <host> <port> [--timeout=<seconds>]
  quick_example.js serial <port> [--baud=9600] [--timeout=<seconds>]
  quick_example.js -h | --help | --version

There are also less standardized approaches to argument description, such as commander. It is a very popular library with plenty of documentation and examples:

const { Command } = require('commander');
const program = new Command();
program.version('0.0.1');
 
program
 .option('-d, --debug', 'output extra debugging')
 .option('-s, --small', 'small pizza size')
 .option('-p, --pizza-type <type>', 'flavour of pizza');
 
program.parse(process.argv);
 
const options = program.opts();
if (options.debug) console.log(options);
console.log('pizza details:');
if (options.small) console.log('- small pizza size');
if (options.pizzaType) console.log(`- ${options.pizzaType}`);

Personally, I prefer docopt as a universal convention that is well known to everyone.

1.2 Interactive dialogue stepper

However, sometimes you want the resulting tool to act more interactively (e.g. to ask questions and offer answer options), since this makes the user experience friendlier and simpler. This approach is usually used in installers or configuration utilities that take a few steps to achieve a result.

For this purpose, you can use the amazing inquirer library: a collection of common interactive command-line user interfaces. inquirer has well-written documentation with good examples for different use cases. It also has pluggable modules that extend the built-in input types. Besides this, its usage is quite straightforward.

1.3 Output formatting

To change colors in NodeJS output, you can wrap the output string in ANSI escape codes in your console.log statement:

const stringToMakeYellow = 'I am yellow'; // define the string we want to color

console.log('\x1b[36mI am cyan\x1b[0m');  // cyan
// OR
console.log('\x1b[36m%s\x1b[0m', 'I am cyan');  // cyan

console.log(`\x1b[33m${stringToMakeYellow}\x1b[0m`);  // yellow
// OR
console.log('\x1b[33m%s\x1b[0m', stringToMakeYellow);  // yellow

The same thing can be achieved more elegantly with the colors library, which makes your output fancier and easier to read:

const colors = require('colors');
 
console.log('hello'.green); // outputs green text
console.log('i like cake and pies'.underline.red) // outputs red underlined text
console.log('inverse the color'.inverse); // inverses the color
console.log('OMG Rainbows!'.rainbow); // rainbow
console.log('Run the trap'.trap); // Drops the bass

Digging into the colors npm module, we can see that it alters the String prototype. If you prefer prototypes to be left alone, use the following code instead:

const colors = require('colors/safe');
console.log(colors.red('Text in red'));

Now let’s look in more detail at how to set up a project and use these libraries.

Project initialization

1) To have your CLI tool as an npm module that can be used globally (installed with npm i -g and then run from any folder), you first need to initialize the npm project:

git init 
npm init

After going through the steps of npm init, you will get a folder with package.json in it and can start development. It’s also better if your project is linked to a GitHub or GitLab repository (this will be used in the final steps, after publishing to npm).

2) After this, install the dependencies. In our case, they are as follows:

npm i docopt inquirer colors

3) To make your script runnable by Node.js, add a shebang at the beginning of the file:

#!/usr/bin/env node

4) Give your script execution rights:

sudo chmod u+x script_name.js

5) Then add the command mapping to package.json. This lets you check the CLI’s behavior as it will work on an end-user system: you can keep editing the source code in your project, while running the interactive command from any location will execute its current version.

"bin": {"interactive": "./bin/interactive_cli.js"}

6) And finally, to install your CLI tool globally for development, simply give npm the location of the directory where the module is located:

npm install -g ./interactive_cli

After that, running the command mapped in step 5 will execute your script as if it were installed globally from npm. Now everything is set up to develop and test your utility.

Development

3.1 Parameters parsing

First of all, you need to define the signature of your CLI tool (which parameters must be passed to your script). This can be handled with docopt: include the docopt function from the library and describe the parameters.

You can define various types of parameters:

— required values: ./connect.js <host> <port>

— required key-value pairs: ./connect.js --host <host> --port <port>

— optional key-value pairs: ./connect.js [--host <host>] [--port <port>]

— short/long required parameter: ./connect.js (-l <login> | --login <login>)

— short/long optional parameter: ./connect.js [-p <pass> | --password <pass>]

Parameters that have both short and long flag forms should be enclosed in parentheses when required or in square brackets when optional.

A complete documentation example should contain the parameter combinations that are used together (if there are any) and a list of all parameters with a short description of each.

const { docopt } = require('docopt');
 
const doc = `
Usage:
   client tcp --host <host> --port <port> [-t <seconds> | --timeout <seconds>]
   client serial <serial_port> [-b | --baud 9600] [-t <seconds> | --timeout <seconds>]
   client -h | --help | --version
 
Options:
   <serial_port>                        Location of device with serial port
   --host <host>                        Host of process to connect
   --port <port>                        Port of process to connect
   -t <seconds> | --timeout <seconds>   Time to wait connection in seconds
   -b --baud                            Baud value
`
const { version } = require('./package.json')
 
function main() {
   const params = docopt(doc, { version });
 
   console.log(params);
}
 
main();

The code above, called with the parameters client serial /dev/cu.SLAB_USBtoUART, will give output like this (by selecting a serial port, we define the interface that the IoT device will be connected through):

{
 '--help': false,
 '--host': null,
 '--port': null,
 '--timeout': null,
 '--version': false,
 '-h': false,
 '<serial_port>': '/dev/cu.SLAB_USBtoUART',
 serial: true,
 tcp: false
}

3.2 Validation

It’s good to know that the parameters the user passes to your script will not break everything. Docopt itself checks that all required parameters were passed. If they weren’t, the script shows the documentation (the value of the doc variable) so the user can review what was missed. But sometimes it’s also nice to give detailed information about which parameter had an invalid value and why. For this, it’s better to use a more advanced validator that checks all argument values for correct types. At WebbyLab we usually use the livr module with the livr-extra-rules extension.

It has well-written documentation with multiple useful examples. For the parameters from the previous example, validation will look something like this:

const LIVR = require('livr');
const extraRules = require('livr-extra-rules');
 
LIVR.Validator.registerDefaultRules(extraRules)
 
const validator = new LIVR.Validator({
   '--host'        : [ 'string' ],
   '--port'        : [ 'positive_integer', { 'number_between': [ 1, 65535 ] } ],
   '<serial_port>' : [ 'string', { 'like': '\/dev\/.+' } ],
   '--timeout'     : [ 'positive_integer' ],
   serial          : [ 'required', 'boolean' ],
   tcp             : [ 'required', 'boolean' ]
});
 
const validParams = validator.validate(params);

It is also possible to define custom validation rules and plug them in to extend the library’s built-in set, if the defaults are not enough.
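As a sketch of what such a custom rule can look like: in LIVR, a rule is a builder function that returns a validator closure receiving the value and returning an error code string (or undefined when valid). The rule name serial_device_path and the error code below are hypothetical examples, not part of livr itself:

```javascript
// A LIVR rule is a builder function returning a validator closure.
// The closure gets the value and returns an error code string on
// failure, or undefined when the value is valid or empty.
// Rule name and error code here are hypothetical examples.
function serial_device_path() {
    return (value) => {
        if (value === undefined || value === null || value === '') return; // empty values are skipped
        if (typeof value !== 'string' || !/^\/dev\/.+/.test(value)) {
            return 'WRONG_SERIAL_PATH';
        }
    };
}

// It could then be plugged in with:
// LIVR.Validator.registerDefaultRules({ serial_device_path });

// Standalone check of the rule closure:
const check = serial_device_path();
console.log(check('/dev/cu.SLAB_USBtoUART')); // undefined -> valid
console.log(check('ttyUSB0'));                // 'WRONG_SERIAL_PATH'
```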

3.3 Interactive utils

As stated before, scripts are good, but sometimes it’s better to have an interactive dialogue that guides the user through an installation or setup process. For this purpose, I recommend inquirer. The module’s API is simple: to start a dialogue, require the library, call its prompt method, and collect the answers in then:

const inquirer = require('inquirer');
 
inquirer
   .prompt([
       /* Pass your questions in here */
   ])
   .then((answers) => {
       // Use user feedback for... whatever!!
   })

You can pass as many questions to prompt as you need, using the different input types the library provides. inquirer is also pluggable (additional input types can be installed as separate dependencies).

Let’s have a look at a real example of how it could be used. In our projects, we usually have multiple docker-compose files that should be used together and include many services. For some tasks you don’t need all the services and can work with only a few of them (to reduce the load on your poor PC). For this, you often need to execute commands like:

docker-compose -f docker-compose.yml -f docker-compose.sandbox.yml -f docker-compose.development.yml mysql backend emqx nginx

The list of needed services can differ from task to task, and at some point it became complicated to edit these long commands every time. So I decided to build an interactive CLI that asks in several steps which docker-compose parameters to apply. It parses the docker-compose files for you, lists all the services, and lets you choose which to start. Besides this, you can filter services by name and pick different commands. The interface looks like this:

To build such a tool, all that was required was to read and parse the YAML files in get_services and then construct a prompt with inquirer:

const composeFiles = await get_compose_files();
const services = get_services(composeFiles);
 
inquirer.prompt([
       {
         type: 'autocomplete',
         name: 'command',
         choices: () => commands,
         when: () => !commands.includes(commandArgument),
         source: (answers, input) => source_search(commands, answers, input)
       },
       {
         type: 'checkbox-plus',
         message: 'Choose service',
         name: 'services',
         suffix: ' (Press <space> to select, type name to search):',
         pageSize: 10,
         highlight: true,
         searchable: true,
         when : (answers) => {
           const command = answers.command || commandArgument;
           return commandsWithServices.includes(command);
         },
         choices: async (answers) => {
           return services;
         },
         source: (answers, input) => {
           return source_search(services, answers, input)
         }
       }
   ], []).then(async (answers) => {
       const args = prepare_args(composeFiles, answers)
       run_command('docker-compose', args)
});

As you can see, I used two prompt entries: the first for choosing the command to execute (single choice) and the second for the list of services (multiple choice). Both inputs provide a scrollable list that can be filtered by typing a name.

You can check the full source of this utility on GitHub or install it from npm and see how it works.

3.4 Progress tracking

Sometimes you write scripts that process time-consuming jobs. In this case, it’s good to show how the task is going, and whether it’s going at all. In most cases it’s enough to log periodically, for example how many records have been processed. To avoid cluttering the terminal with huge output, it’s better to log every 100/1000/10000th record processed:

for (let index = 0; index < items.length; index++) {
   if (!(index % 100)) console.log(`Processed ${index} records of ${items.length}`);

   // doing some processing
   await items[index].process();
}

Another approach can make your script’s user experience even more enjoyable: display the actual progress on the same line, updating it in place. For this, you can use npm modules like progress or the less popular cli-progress. Here is a usage example of the progress package:

var ProgressBar = require('progress');
var bar = new ProgressBar(':bar', { total: 10 });
var timer = setInterval(function () {
 bar.tick();
 if (bar.complete) {
   console.log('\ncomplete\n');
   clearInterval(timer);
 }
}, 100);

Overall, it’s not hard to write a simple function that does this without additional dependencies:

const colors = require('colors'); // needed for the .white / .green string helpers

const bar = '#';

function showProgress(current, totalCount) {
   const percent = Math.round((current / totalCount) * 100);

   process.stdout.clearLine(0);  // clear current line
   process.stdout.cursorTo(0);   // move cursor to beginning of line
   process.stdout.write(`Processed ${current} / ${totalCount}. Progress ${percent}% `.white + `${bar.repeat(percent)}`.green);
}

The usage is also quite simple:

for (let index = 0; index < items.length; index++) {
   if (!(index % 100)) showProgress(index, items.length);

   // doing some processing
   await items[index].process();
}

3.5 Logging

As mentioned before, it’s crucial to inform users about the progress of your script’s execution. Sometimes a script doesn’t process multiple entities but still has important stages of execution. In this case, it’s fair to log each important milestone as well as any errors that happen. The most common example is establishing a connection to some process and then doing a job. Let’s look at how it could be implemented with an MQTT client:

const mqttClient = mqtt.connect(mqttUrl, { username, password, rejectUnauthorized: false });
 
return new Promise((resolve, reject) => {
   mqttClient.on('connect', () => {
       console.log(`Connected to ${mqttUrl}`);
 
       mqttClient.subscribe(topicsToSubscribe, (err) => {
           // Do the job
       });
   });
   let messagesCount = 0;
 
   mqttClient.on('message', (topic, message) => {
       // Do the job
       messagesCount++;

       if (!(messagesCount % 1000)) console.log(`Messages synced: ${messagesCount}`);
   });
 
   mqttClient.on('error', (error) => {
       console.log(error);
 
       return reject(error);
   });
});

As you can see, we give the user information about three things here:

  • connection establishment
  • an error in case the connection could not be established
  • progress of message processing in steps of 1000

The main rule here is that users should not have to add logs to your script themselves to understand what is happening.

3.6 Time measurement

Sometimes it’s useful to show the user how long the script has been running (it can help decide whether the process should be interrupted). Of course, you can output raw millisecond values by working with the Date object:

const startTime = +new Date();

// Do something

const endTime = +new Date();

console.log(`Execution time: ${endTime - startTime} ms`);

Using console.time can also be useful for measuring the execution time of different stages:

console.time('Some Job');
 
// Do Some job
 
console.timeEnd('Some Job');
 
console.time('Another Job');
// Do Another job
 
console.timeEnd('Another Job');

But if you want to format time in a human-readable form, you can include a tiny library called pretty-ms and easily get well-formatted time output:

const prettyMilliseconds = require('pretty-ms');
prettyMilliseconds(1337000000);
//=> '15d 11h 23m 20s'
prettyMilliseconds(1337);
//=> '1.3s'
prettyMilliseconds(133);
//=> '133ms'
// `compact` option
prettyMilliseconds(1337, {compact: true});
//=> '1s'
// `verbose` option
prettyMilliseconds(1335669000, {verbose: true});
//=> '15 days 11 hours 1 minute 9 seconds'

3.7 GUI in terminal

There are also cases when more exotic terminal output is required. For example, you may want to track statistics about processes (memory and CPU consumption, etc.) and display them in a table that updates in place. This is possible with modules like blessed and charm. These libraries give you mechanisms to draw boxes in the terminal, fill them with lists of items, manage the whole window output, clear it, move the cursor, and much more.

There is also react-blessed, which implements a React renderer for the blessed library.

Publishing to NPM

When your utility is ready, the last step is to publish it to NPM as a separate module. Don’t forget to add a descriptive README and link the GitHub/GitLab repository in your package.json:

"repository": {
   "type": "git",
   "url": "git@github.com:unsigned6/docker-compose-helper.git"
}

If you have files you want on GitHub (e.g. images used in documentation) but don’t want users to download them from npm every time they install your module, a good practice is to list them in .npmignore (its syntax is the same as .gitignore).
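For example, a hypothetical .npmignore keeping documentation images and CI files out of the published package might look like this (the paths are illustrative):

```
# keep documentation assets out of the npm tarball
docs/images/
*.gif

# CI configuration is only needed in the repository
.github/
```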

After everything is set, you also have to make sure the name you picked is free on NPM (publishing under an already occupied name won’t be possible).

To publish the CLI, you also need to register an account on NPM and log in from the terminal:

npm login

You’ll be prompted for your username, password, and email address. After that, you are finally ready to publish the package:

npm publish

That’s it, your package is on NPM now 😉.

Conclusion

So we’ve looked at different approaches to implementing CLI utilities, from classical argument passing to interactive dialogue steppers, and went through popular libraries for building such tools. I hope these examples help you improve your custom CLI tools and make them easier to use. Thanks for reading.
