IBM Code Engine: A Practical Guide to Bash Script Logs
Tired of silent Bash scripts in IBM Code Engine? Learn how to master logging, from basic echo commands to structured JSON logs and integration with Log Analysis.
David Lee
A Senior DevOps Engineer specializing in cloud-native applications and serverless architectures.
You’ve crafted the perfect Bash script. It automates a tedious task, processes some data, or runs a critical backup. You package it up, deploy it as a Job in IBM Code Engine, and hit "Run." The job status eventually changes to "Succeeded," but a nagging question remains: what actually happened? Did it process all the files? Were there any minor, non-fatal errors? Without logs, your script is a black box, leaving you in the dark.
IBM Code Engine is a phenomenal serverless platform that lets you run anything from complex microservices to simple scripts without worrying about infrastructure. But this power and simplicity come with a new set of challenges, and one of the most common is visibility. When your script isn't running in an interactive terminal on your laptop, you can't just watch the output scroll by. You need a deliberate, robust logging strategy.
This guide is here to demystify that process. We'll move from the absolute basics to more advanced, structured logging techniques, giving you the practical tools you need to make your Bash scripts in Code Engine transparent, debuggable, and trustworthy. Let's turn that black box into a glass one.
Why Standard Logging is Key in Code Engine
Before diving into commands, it's crucial to understand the fundamental mechanism Code Engine uses for logging. Like most containerized and serverless environments (including Docker and Kubernetes), Code Engine doesn't magically find log files you write to disk. Instead, it captures two standard I/O (Input/Output) streams:
- Standard Output (stdout): This is the default stream for normal, informational output from a program.
- Standard Error (stderr): This stream is designated for error messages and diagnostics.
Any text your Bash script writes to either `stdout` or `stderr` is automatically collected by the Code Engine platform and treated as a log entry. If your script isn't writing to these streams, as far as Code Engine is concerned, it's running silently.
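For example, a line appended to a file inside the container never reaches the platform, while the same text written to `stdout` does. Here's a minimal sketch illustrating the difference (the file path is purely for illustration):
# NOT captured by Code Engine: this writes to the container's local disk
echo "INFO: Job started" >> /tmp/job.log
# Captured by Code Engine: this writes to stdout
echo "INFO: Job started"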
The Basics: Capturing Output with `echo` and `printf`
The simplest way to write to `stdout` is with the commands you already know and love. Both `echo` and `printf` are your primary tools for generating log messages.
Using `echo` is straightforward:
echo "INFO: Starting the data processing job..."
`printf` offers more control over formatting, which can be useful for including variables. It's generally considered a more robust option than `echo`, as its behavior is more consistent across different shells.
FILENAME="dataset_2025-01-15.csv"
printf "INFO: Processing file: %s\n" "$FILENAME"
By default, both of these commands write to `stdout`, making their output immediately visible in Code Engine's logs.
The Power of Redirecting: stdout vs. stderr
Just printing everything to `stdout` works, but it's not ideal. When you're sifting through hundreds of log lines, you want to be able to quickly distinguish between normal operational messages and critical errors. This is where `stderr` comes in.
In Bash, you can redirect the output of a command to a specific stream. File descriptor `1` represents `stdout`, and `2` represents `stderr`. To send a message to `stderr`, you redirect it using `>&2`.
# This goes to stdout (normal log)
echo "INFO: Checking for source file..."
# This goes to stderr (error log)
echo "ERROR: Source file not found!" >&2
Why is this so important? Most logging tools, including the Code Engine UI and IBM Log Analysis, can filter or color-code messages based on their stream. By sending errors to `stderr`, you make it trivial to spot problems without having to read every single line.
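You can verify this separation before deploying by running the script locally and redirecting each stream to its own file. A quick sanity check, assuming your script is saved under the hypothetical name `my-script.sh`:
# Send stdout (fd 1) and stderr (fd 2) to separate files
./my-script.sh 1>normal.log 2>errors.log
# Errors, and only errors, should land here
cat errors.log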
A Practical Example: A Simple Backup Script
Let's put this together in a script that could be run as a Code Engine job. This script will copy a file, adding a timestamp, and it will log its progress and any potential errors correctly.
#!/bin/bash
# Exit immediately if a command exits with a non-zero status.
set -e
SOURCE_FILE="/app/data.txt"
BACKUP_DIR="/app/backups"
echo "INFO: Starting backup process for $SOURCE_FILE..."
mkdir -p "$BACKUP_DIR"
if [ ! -f "$SOURCE_FILE" ]; then
    echo "ERROR: Source file not found at $SOURCE_FILE. Cannot create backup." >&2
    exit 1
fi
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
BACKUP_FILE="$BACKUP_DIR/data-$TIMESTAMP.bak"
echo "INFO: Creating backup: $BACKUP_FILE"
# With set -e, a failed cp would exit the script before a $? check could run,
# so test the command directly in the if condition instead.
if cp "$SOURCE_FILE" "$BACKUP_FILE"; then
    echo "SUCCESS: Backup completed successfully."
else
    echo "ERROR: Failed to copy file to $BACKUP_FILE." >&2
    exit 1
fi
In this example:
- Informational messages about the process starting and succeeding are sent to `stdout` via `echo`.
- Critical error messages (file not found, copy failed) are explicitly redirected to `stderr` with `>&2`.
- The script exits with a non-zero status code (`exit 1`) on error, which clearly marks the Code Engine job run as "Failed."
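If you want to try this end to end, you could bake the script into a container image and register it as a Code Engine job. A minimal sketch, where the image reference `icr.io/my-namespace/backup-job:latest` is hypothetical and the script is assumed to be the image's entrypoint:
# Create the job from a container image (image reference is hypothetical)
ibmcloud ce job create --name my-backup-job --image icr.io/my-namespace/backup-job:latest
# Submit a run of the job
ibmcloud ce jobrun submit --job my-backup-job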
Structuring Your Logs for Clarity
As your scripts become more complex, you'll want more than just simple text messages. Structured logging is the practice of formatting your logs in a consistent, machine-readable way, often using JSON. This makes your logs incredibly easy to search, filter, and analyze in a proper logging tool.
Even without full JSON, you can add structure. A simple improvement is to add a log level and a timestamp to every message.
echo "INFO: $(date -u +%Y-%m-%dT%H:%M:%SZ) - Backup process starting..."
For the ultimate in parsability, you can format your logs as JSON objects. This is more verbose but pays huge dividends when you need to query your logs.
FILENAME="data.txt"
echo "{\"level\": \"INFO\", \"timestamp\": \"$(date -u +%Y-%m-%dT%H:%M:%SZ)\", \"message\": \"Processing file\", \"file\": \"$FILENAME\"}"
Here’s a quick comparison:
| Log Style | Pros | Cons |
|---|---|---|
| Plain Text | Simple to write, easy for humans to read. | Difficult to parse automatically, hard to filter on specific data points. |
| Structured (JSON) | Machine-readable, powerful querying and filtering, easy integration with log analysis platforms. | More verbose, can be harder for humans to read without tools, requires careful escaping in Bash. |
Advanced Techniques: Logging Functions
To avoid repeating logging logic and ensure consistency, it's best practice to create dedicated logging functions in your script. This centralizes your log formatting and makes the main body of your script much cleaner.
#!/bin/bash
# Central logging function
log() {
    local level=$1
    shift
    # "$*" joins all remaining arguments into a single string;
    # an unquoted $@ here would be fragile.
    local message="$*"
    echo "[${level}] [$(date -u +%Y-%m-%dT%H:%M:%SZ)] - ${message}"
}
# Helper functions for different levels
log_info() {
    log "INFO" "$@"
}
log_error() {
    log "ERROR" "$@" >&2
}
# --- Main script logic ---
log_info "Starting the application."
FILENAME="/app/data.txt"
if [ ! -f "$FILENAME" ]; then
    log_error "File not found: $FILENAME"
    exit 1
fi
log_info "Successfully validated file presence."
With this approach, you can easily change your log format in one place (the `log` function) and have it apply to your entire script. The intent of each log message is also clearer (`log_info` vs. `log_error`).
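One way to combine this with the structured logging from the previous section is to swap the body of `log()` for a JSON emitter; `log_info` and `log_error` keep working unchanged. A sketch, again assuming `jq` is available in your image:
# JSON variant of log(); the helper functions need no changes
log() {
    local level=$1
    shift
    jq -cn \
      --arg level "$level" \
      --arg timestamp "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
      --arg message "$*" \
      '{level: $level, timestamp: $timestamp, message: $message}'
}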
How to Access Your Logs in Code Engine
Now that your script is generating beautiful, informative logs, how do you see them?
Via the IBM Cloud CLI
The CLI is the quickest way to get logs for a specific job run. First, you might need to list the job runs to find the name of the one you're interested in, then fetch its logs.
# Ensure you're targeting your Code Engine project
ibmcloud ce project select --name my-project
# List runs for a job named 'my-backup-job'
ibmcloud ce jobrun list --job my-backup-job
# Get logs for a specific job run
ibmcloud ce jobrun logs --name my-backup-job-run-abcde
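Depending on your CLI version, you may also be able to stream logs live while the run is still executing; check `ibmcloud ce jobrun logs --help` to confirm which flags your version supports:
# Stream logs as they are produced (flag availability may vary by CLI version)
ibmcloud ce jobrun logs --name my-backup-job-run-abcde --follow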
Via the IBM Cloud Console (UI)
For a more visual approach:
- Navigate to your Code Engine project in the IBM Cloud Console.
- In the left-hand menu, click Jobs.
- Select the job you're interested in (e.g., `my-backup-job`).
- Go to the Job runs tab.
- Click on the name of the specific run you want to inspect.
- On the job run's details page, you'll find a Logs tab. This will show you the combined `stdout` and `stderr` output from your script.
Going Further: Integrating with IBM Log Analysis
The logs you see in the Code Engine UI are transient and have limited search capabilities. For production workloads, you need a centralized, long-term logging solution. This is where IBM Log Analysis comes in.
You can configure your Code Engine project to automatically forward all logs to an IBM Log Analysis instance. This is done in your project's settings under "Logging." Once configured, you gain several powerful advantages:
- Long-Term Storage: Retain logs for days, weeks, or months.
- Powerful Searching: Use a robust query language to search across thousands of logs and filter by level, timestamp, or any field in your structured JSON logs.
- Alerting: Set up alerts to be notified automatically when a specific error (like your `ERROR: File not found` message) appears in the logs.
Conclusion: From Silent Scripts to Insightful Logs
Effective logging is not an afterthought; it's a core component of building reliable, maintainable applications in a serverless environment like IBM Code Engine. By understanding and using the standard output and error streams, you can transform your Bash scripts from opaque processes into transparent, debuggable workflows.
Start simple with `echo`, distinguish between information and errors with `>&2`, and graduate to structured logging with functions and JSON as your needs grow. By mastering these techniques, you'll spend less time guessing and more time building, confident that you have the visibility you need to keep your jobs running smoothly.