Study Note: Unix-Like Commands


Table of Contents

Utility

Archiving, Splitting, and Encrypting Large Files for DVD Storage

Image Processing with sips (Scriptable Image Processing System)


Emacs

Advanced string replacement techniques in Emacs 🔍


Zsh

Apple’s Transition from Bash to Zsh

Using Wildcards with scp in Zsh

Setting Up Environment Variables and Frequently Used Commands in Linux and macOS (zsh or bash)

My Own .bashrc or .zprofile


Advanced Unix-Like Command Explanatory

Collections of Unix-Like Commands

Searching Through Files for Text

Locating Installed Command-Line Tools on macOS: find "$(brew --prefix)" -name [tool_name] -type f

Managing Processes in Linux and macOS

Accessing and Mounting External Drives

Viewing Shell History Using Emacs: emacs ~/.zsh_history


Debian OS from macOS

Downloading Debian ISO Using Jigdo on macOS

Creating a Bootable SD Card for Debian Installation Using Windows and macOS

Configuring Debian to Use Local ISO Repositories with Fallback to Online Sources


Linux, for Kernel-Level Device Driver

Linux Commands for Kernel and Device Programmers

Linux Kernel Programming and Module Compilation on Raspberry Pi




Utility


Archiving, Splitting, and Encrypting Large Files for DVD Storage

When archiving large folders for storage across multiple DVDs, encryption can be added to enhance security. The following details the steps for archiving, splitting, and encrypting, with a focus on flexible and secure methods.


(A) General Archiving and Splitting Approach

To efficiently manage large folders for single-layer DVD storage (4.7 GB, roughly 4482 MiB of usable space), the tar and split commands can be combined; a chunk size of 4480M keeps each piece within disc capacity:

tar -zcvf - folder_name | split -b 4480M - archive_name.tar.gz.

To restore the split archive, the following steps can be used to concatenate the parts and extract the archive:

cat archive_name.tar.gz.* > full_archive.tar.gz
tar -zxvf full_archive.tar.gz

Alternatively, both steps may be combined into one:

cat archive_name.tar.gz.* | tar -zxvf -

type archive_name.tar.gz.* | tar -zxvf - (on Windows, cmd.exe's type takes the place of cat)

(B) Advanced Splitting for Specific DVD Sizes

The split command allows the chunk size to be adjusted to the target disc: choose a -b value just under the disc's capacity.

In each case, the archive is split into chunks that can fit onto DVDs, with the pieces named sequentially (.aa, .ab, .ac, …).
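A concrete sketch, assuming 4.7 GB (≈4482 MiB) single-layer and 8.5 GB (≈8150 MiB) dual-layer discs:

```shell
# Single-layer DVD: 4480 MiB pieces stay safely under 4.7 GB
tar -zcvf - folder_name | split -b 4480M - archive_name.tar.gz.

# Dual-layer DVD: 8100 MiB pieces stay safely under 8.5 GB
tar -zcvf - folder_name | split -b 8100M - archive_name.tar.gz.
```

Since split's M suffix means MiB (1,048,576 bytes), round the chunk size down rather than using the disc's nominal decimal capacity.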


(C) Encrypting the Archive

Encryption can be added using external tools like GPG or OpenSSL since neither tar nor split directly support password protection.

C-1) Using GPG for Encryption

GPG provides AES-256 symmetric encryption for securing the tarball with a password.

Encrypting and Splitting the Archive:

tar -zcvf - folder_name | gpg --symmetric --cipher-algo AES256 -o archive_name.tar.gz.gpg
split -b 4480M archive_name.tar.gz.gpg archive_name_split.

Restoring the Archive:

cat archive_name_split.* > full_archive.tar.gz.gpg
gpg -o full_archive.tar.gz -d full_archive.tar.gz.gpg
tar -zxvf full_archive.tar.gz

This restores the archive by combining the parts, decrypting, and extracting the tarball.
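If intermediate disk space is scarce, the three restore steps can be collapsed into a single pipeline (gpg prompts for the passphrase):

```shell
# Combine the pieces, decrypt, and extract without writing intermediate files
cat archive_name_split.* | gpg --decrypt | tar -zxvf -
```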

C-2) Using OpenSSL for Encryption

OpenSSL is another option for adding password-based encryption to the tar archive. On OpenSSL 1.1.1 and later, adding -pbkdf2 strengthens the key derivation and silences the deprecation warning about the legacy scheme.

Encrypting and Splitting the Archive:

tar -zcvf - folder_name | openssl enc -aes-256-cbc -e -k 'password' -out archive_name.tar.gz.enc
split -b 4480M archive_name.tar.gz.enc archive_name_split.

Restoring the Archive:

cat archive_name_split.* > full_archive.tar.gz.enc
openssl enc -aes-256-cbc -d -k 'password' -in full_archive.tar.gz.enc -out full_archive.tar.gz
tar -zxvf full_archive.tar.gz

This process ensures that large folders can be efficiently archived, split, and encrypted for secure storage across multiple DVDs.




Image Processing with sips (Scriptable Image Processing System)

(A) Converting PNG to JPEG/JPG

To convert a PNG file to a JPEG file, the following command can be used:

sips -s format jpeg input.png --out output.jpg

Note: The extensions .jpg and .jpeg are interchangeable. The sips command processes both formats the same way.


(B) Converting JPEG/JPG to PNG

To convert a JPEG (or JPG) file to a PNG file, the following command is used:

sips -s format png input.jpg --out output.png
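Beyond format conversion, sips can also inspect and resample images; a brief sketch (file names are placeholders):

```shell
# Report the pixel dimensions of an image
sips -g pixelWidth -g pixelHeight input.png

# Resample so the larger dimension is at most 1024 px, preserving aspect ratio
sips -Z 1024 input.png --out resized.png
```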

Emacs


During undergraduate studies in Computer Science, Emacs was recommended and has been used for over two decades. Familiarity with its shortcuts has facilitated work in C kernel programming and debugging. This document serves both as a guide for readers to grasp the benefits of Emacs and as a resource for personal learning, combining well-known features with newly explored aspects intended for future use.


Emacs is particularly well-suited for individuals who prefer a fully keyboard-driven workflow. This feature enables the execution of virtually any task—be it editing text, managing files, running commands, or browsing the web—without relying on a mouse. Such efficiency stands as one of the most compelling reasons users continue to utilize Emacs even after many years.

Core Benefits and Features:


Buffers in Emacs are fundamental components that refer to any open file, running process, or even a help screen. They allow the management of multiple tasks or documents simultaneously without cluttering the workspace with numerous windows or applications.

Key Aspects of Buffer and Window Management:


Emacs provides robust tools for compiling and debugging code, which are essential for tasks such as kernel programming in C. These features streamline the development process by integrating compilation and debugging directly within the editor.

Functionality Command Description
Executing the Compile Command ESC+x compile Initiates the compilation process for the current project or file. Prompts for the compile command, which can be customized as needed (e.g., make for kernel programming).
Navigating Compilation Errors Ctrl+x ` (backtick) Jumps to the next error in the compilation output. Emacs parses the compilation buffer and highlights errors, enabling quick navigation to problematic lines in the source code.
Launching GDB ESC+x gdb Launches the GNU Debugger (GDB) within Emacs, providing an interface to set breakpoints, step through code, inspect variables, and evaluate expressions directly from the editor.
Setting Breakpoints Ctrl+x Ctrl+a Ctrl+b (or Ctrl+x SPC in a source buffer) Sets a breakpoint at the current line in the source code. Breakpoints allow the debugger to pause execution at specific points, facilitating the inspection of program state.
Stepping Through Code n (next)
s (step)
c (continue)
Executes the next line of code, steps into functions for detailed inspection, and continues execution until the next breakpoint or end of the program.
Inspecting Variables ESC+x gdb-many-windows Opens multiple debugging windows, including source code, assembly, registers, and variable lists, aiding in monitoring the state of variables and program flow during debugging sessions.

Benefits for Kernel Programming:

Emacs' compilation and debugging capabilities make it a powerful tool for kernel programming in C, offering an all-encompassing environment that supports efficient and effective development practices.


(D) Browsing with Emacs: Benefits and How to Navigate

EWW (Emacs Web Wowser) is a built-in web browser in Emacs that allows browsing the web within a text-based environment. Although minimal compared to graphical browsers, EWW provides an efficient means to navigate the web while fully leveraging the keyboard-driven workflow appreciated by many Emacs users.

Functionality ___Command___ Description
Opening a URL ESC+x eww Enter the URL or search term to visit a webpage. EWW will load the page within a buffer.
Navigating Between Pages l (Back)
r (Forward)
g (Reload)
l returns to the previous page, r moves forward in history, and g reloads the current page.
Scrolling 1' Scroll through the page by screen or line increments.
Following Links Enter Position the cursor over a link and press Enter to follow it.
Opening Links in New Buffers ESC+Enter (M-RET) Opens the link in a new buffer, allowing multitasking across several web pages.
Returning to the Home Page h Navigates back to the home page (if set) or the default Emacs home page.
Bookmark a Page b Bookmarks the current page for quick access later without remembering the URL.
View Bookmarks B Lists all bookmarks, allowing direct access to any saved page.
Viewing Browsing History H Displays a list of previously visited pages, navigable with arrow keys or by entering corresponding numbers.
Toggle Images I Toggles the display of images on or off.
Source View 2' Opens the raw HTML source code of the current page in a new buffer.
Change Search Engine 3' Customizes the default search engine used by EWW.

1': Ctrl+v (Page Down), Meta+v (Page Up), Arrow Keys / Ctrl+n (Down) / Ctrl+p (Up)

2': ESC+x eww-view-source

3': Add (setq eww-search-prefix "https://www.google.com/search?q=") to configuration

Benefits of Using EWW:

While EWW does not replace full-featured browsers like Firefox for multimedia-heavy browsing or complex web applications, it offers an efficient, minimalistic browsing experience for those who prefer staying within the Emacs ecosystem and rely on text-based content.


(E) Enhancing File Management with Dired Mode

Dired (Directory Editor) mode in Emacs provides a powerful and interactive method for managing files. It facilitates browsing and manipulating files and directories within the editor, thereby streamlining file system operations.

Functionality ___Command___ Description
Launching Dired ESC+x dired Opens Dired mode, prompting for a directory path. The specified directory is then displayed for file and directory management within Emacs.
File Operations C (Copy)
R (Rename)
D (Delete)
Executes basic file operations such as copying, renaming, and deleting. Can be performed on single or multiple files for batch operations.
Directory Navigation Enter
^
Enter opens the directory or file under the cursor, while ^ moves up one directory level.
Marking Files m (Mark)
u (Unmark)
Marks files for batch operations and unmarks them as needed, allowing multiple files to be acted upon simultaneously.
Opening Files Enter or f Opens the file under the cursor in a new buffer.
Sorting Files s (Sort) Sorts files by various criteria such as name, size, or modification date to enhance file management efficiency.
Refreshing the Listing g (Revert Buffer) Re-reads the directory from disk, updating the listing to reflect changes made outside the current Dired buffer.
Executing Shell Commands ! (Shell Command) Executes shell commands directly from within Dired on selected files, facilitating tasks like batch renaming or compression.

Dired mode transforms Emacs into a comprehensive file management system, providing the necessary tools to handle complex file operations without leaving the editor environment.


(F) Emacs Lisp: Extending Emacs Functionality

Emacs Lisp (Elisp) is the programming language embedded within Emacs, allowing for extensive customization and extension of the editor's capabilities. Emacs Lisp enables the writing of scripts, defining new commands, and creating custom workflows tailored to individual needs.

Custom Key Bindings:

Emacs Lisp can be used to remap existing key bindings or create new ones, enhancing the efficiency of the keyboard-driven workflow. For example, binding a frequently used command to a simpler key combination can streamline operations.

;; Example: Bind F5 to save all buffers
(global-set-key (kbd "<f5>") 'save-some-buffers)  

Automating Tasks:

Repetitive tasks can be automated using Emacs Lisp, reducing the need for manual intervention and minimizing the potential for errors. Automating file operations, text transformations, or buffer management are common applications.

;; Example: Automatically delete trailing whitespace on save
(add-hook 'before-save-hook 'delete-trailing-whitespace)  

Defining New Commands:

Users can define new interactive commands to perform specialized functions, enhancing the editor's functionality to suit specific workflows or projects.

;; Example: Define a command to insert the current date
(defun insert-current-date ()
   "Insert the current date at point."
   (interactive)
   (insert (format-time-string "%Y-%m-%d")))

(global-set-key (kbd "C-c d") 'insert-current-date)  

Creating Custom Modes:

Emacs Lisp allows for the creation of new major or minor modes, providing tailored environments for different programming languages, file types, or project requirements.

;; Example: Define a simple minor mode
(define-minor-mode my-custom-mode
   "A simple custom minor mode."
   :lighter " MyMode"
   :keymap (let ((map (make-sparse-keymap)))
      (define-key map (kbd "C-c m") 'insert-current-date)
       map))

(add-hook 'text-mode-hook 'my-custom-mode)  

Examples of Emacs Lisp Enhancements:

Emacs Lisp empowers users to transform Emacs into a highly personalized and powerful development environment. By leveraging Emacs Lisp, users can tailor Emacs to meet their unique requirements, enhancing productivity and fostering an efficient workflow.


Several command-line switches enhance Emacs' operation, similar to the -nw (no-window) option. These switches provide flexibility in how Emacs is launched, catering to various user needs and preferences.

Switch_Options Description
-q Starts Emacs without loading the initialization file (.emacs or init.el). Useful for troubleshooting configuration issues or starting Emacs with default settings.
--no-splash Launches Emacs without displaying the splash screen, resulting in a cleaner and faster startup experience.
--daemon Runs Emacs in the background as a daemon, allowing subsequent Emacs instances to open more quickly by connecting to the already running process. Particularly beneficial for users who frequently start and stop Emacs sessions.
-batch Executes Emacs in batch mode, without opening the graphical or text interface. Typically used for script execution or automation tasks, enabling Emacs to process files and perform operations without user interaction.
--debug-init Starts Emacs with debugging enabled for the initialization process, aiding in the identification and resolution of errors within startup configuration files.

These switches provide users with the ability to customize the Emacs startup behavior, enhancing the overall user experience by aligning Emacs' operation with specific requirements and use cases.
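The -batch switch in practice: Emacs evaluates Lisp and exits, which suits scripted tasks (file names below are placeholders):

```shell
# Evaluate an expression and print the result, with no UI
emacs --batch --eval '(princ (+ 1 2))'

# Byte-compile a file non-interactively, a common batch task
emacs --batch -f batch-byte-compile init.el
```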


  1. Ctrl+k: Deletes from the cursor to the end of the line and stores the deleted content in the kill ring (Emacs' clipboard equivalent).
  2. ESC+x compile: Executes the compile command, enabling code compilation within Emacs, which is particularly useful for developers.
  3. ESC+x query-replace: Initiates an interactive find-and-replace operation, prompting for confirmation before each replacement.
  4. ESC+x replace-string: Performs a non-interactive find-and-replace, replacing all occurrences of the specified string.
  5. ESC+x shell: Opens a shell within Emacs, providing access to a command-line interface directly from the editor.
  6. Ctrl+space, ESC+w: Marks a region for copying and then copies the selected text into the kill ring.
  7. Ctrl+y: Pastes (or "yanks") the most recently copied or cut text from the kill ring.
  8. Ctrl+y followed by ESC+y: Cycles through the kill ring, enabling the pasting of previously copied or cut items.
  9. Ctrl+x u: Undoes the most recent changes. This command can be repeated to undo multiple actions.

Advanced string replacement techniques in Emacs 🔍

I. Foundation — built‑in replacement commands

Emacs offers several native commands for interactive or automatic string substitution. The macOS convention is used throughout (⌥ = Meta (M), ⌘ = Super (s)).

  1. Command comparison ⚙️

    Command Scope & confirmation Pattern type Typical keystroke
    query‑replace Interactive, buffer or region Literal M %
    query‑replace‑regexp Interactive, buffer or region Emacs Lisp regexp M ⇧ %
    replace‑string Automatic, buffer or region Literal M‑x replace-string

II. Advanced scenarios

The following examples illustrate practical refactoring patterns and the reasoning behind each step.

  1. Global refactor with captured groups ✨

    M ⇧ %  ^\((defun\s-+\)old_\(.*\)$ RET \1new_\2 RET !

    What happens:

    • ^\((defun\s-+\) anchors at the start of the line and captures the opening parenthesis, the defun keyword, and its trailing whitespace into Group 1 (in Emacs regexps, a bare ( is literal while \( opens a group).
    • old_\(.*\)$ captures the remainder of the symbol (e.g. old_process) into Group 2.
    • The replacement string \1new_\2 rebuilds each definition as (defun new_process …), preserving the original suffix.
    • ! at the prompt answers “yes to all,” executing replacements across the entire buffer without further queries.

    This technique is ideal for systematic API renaming after a naming‑policy change.

  2. Selective region replacement

    1. Activate a region—perhaps an individual function—using C space to mark the start and motion keys (e.g. M >) to mark the end.
    2. Invoke M % (or M ⇧ % for regex) and supply search/replacement terms.
    3. Only occurrences inside the highlighted region are offered for confirmation, providing a safety net against global side effects.

    Regional replacement is particularly useful when refactoring temporary variables inside a long file while leaving other sections untouched.

  3. Backward traversal 🔄

    C -  M %

    A negative prefix argument (C - before M %) runs query-replace backward, scanning from point toward the beginning of the buffer (BOB). Reverse traversal prevents accidental double replacements when iterating through matches already passed during forward edits.

  4. Embedded newline and tab literals

    M ⇧ %  ,\s-* RET ,C-q C-j TAB RET !

    Purpose: re‑formatting comma‑separated JSON arrays so that each element begins on a new, indented line.

    • \s-* matches any run of whitespace‑syntax characters after the comma.
    • Literal control characters cannot be typed as \n or \t in the minibuffer; C-q C-j inserts a real newline, and C-q TAB inserts a real tab.
    • The replacement therefore rewrites each comma as a comma followed by a newline and a tab, placing the next array element on its own indented line.

    The command may be combined with narrowing (C‑x n n) to focus on a JSON block without disturbing surrounding code.

III. Complementary replacement methods

IV. Best‑practice checklist ✅

Written on May 11, 2025


Zsh


Apple’s Transition from Bash to Zsh

In 2019, Apple officially adopted Zsh (Z Shell) as the default shell, starting with macOS Catalina (10.15). This transition marked a significant change from the previously utilized Bash, which had been the default since the inception of macOS. The switch was largely driven by licensing issues and the enhanced features offered by Zsh, making it a more appealing choice for modern developers and power users.

Licensing Issues

Apple's decision to shift from Bash to Zsh was influenced substantially by licensing concerns. Until version 3.2, Bash was licensed under the GNU General Public License v2 (GPLv2), which posed fewer restrictions on redistribution and modification. Apple continued using this version for many years.

However, with the release of Bash 4.0, the license changed to GPLv3, which introduced stricter conditions, including an explicit patent grant and the anti-Tivoization clause requiring distributors to provide installation information for locked-down devices. These obligations sit poorly with Apple's model of shipping tightly controlled, partly proprietary systems.

By transitioning to Zsh, which is licensed under an MIT-like license, Apple was able to circumvent these issues. This permissive license allowed Apple to include Zsh without the obligation to disclose proprietary modifications, aligning more effectively with Apple’s distribution model.

Advantages of Zsh Over Bash

Apart from addressing licensing concerns, Zsh provided various technical advantages that improved the user experience and rendered it a more suitable choice for Apple’s ecosystem.

1. Permissive Licensing

The MIT-like license associated with Zsh afforded Apple greater flexibility. Unlike GPLv3, it does not impose the requirement to share modifications, permitting Apple to distribute Zsh freely without concerns over proprietary rights.

2. Enhanced Features for Power Users

Zsh offers a range of features that enhance productivity and streamline shell interactions, which are particularly beneficial for developers: smarter, context-aware tab completion; spelling correction for commands and paths; history shared across sessions; and powerful recursive globbing (e.g. **/*.c).

3. User-Configurable Options and Prompt Customization

Zsh supports a broad spectrum of configuration options, enabling users to personalize nearly every aspect of the shell. This includes the capability to create dynamic prompts that display real-time information, contributing to a more informative and engaging terminal experience.

Community and Ecosystem Support

Zsh’s popularity among developers and system administrators has fostered a vibrant community that actively provides resources such as the Oh My Zsh configuration framework, plugin managers, themes, and extensive completion definitions.

In adopting Zsh as the default shell, Apple aligned with the preferences of a considerable portion of its developer user base. Many developers had already embraced Zsh for its advanced features, and the switch made macOS more intuitive and appealing to this audience.

Security and Maintenance Benefits

Shifting to Zsh also facilitated a departure from the aging Bash 3.2, released in 2006 and frozen on macOS to avoid GPLv3: tracking a current upstream shell means security fixes and maintenance improvements arrive without licensing constraints.



Using Wildcards with scp in Zsh

Understanding Zsh’s Globbing Behavior

1. Local vs. Remote Expansion

When utilizing wildcards with scp, it is important to recognize that Zsh may attempt to expand these wildcards locally before executing the command. For example, a command intended to copy all .txt files from a remote server might resemble:

scp user@remote:/path/to/files/*.txt /local/destination/

Zsh attempts to expand *.txt against the local file system during its globbing phase, which occurs before the command is executed. Because the pattern begins with user@remote:/path, it almost never matches a local file, and by default Zsh then aborts with "zsh: no matches found" instead of running scp at all; if matching local files did exist, the wildcard would be replaced with their names and the wrong arguments passed to scp.

2. Why Escaping Wildcards Works

To ensure the wildcard is interpreted on the remote server rather than locally, escaping the wildcard with \* is necessary:

scp user@remote:/path/to/files/\*.txt /local/destination/

Escaping the asterisk directs Zsh to pass the wildcard to scp without local expansion, allowing the remote shell to interpret *.txt and carry out the intended file selection.

Alternative Methods to Control Expansion

Several techniques can prevent Zsh from performing local expansion on wildcards meant for remote servers:
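The usual options, all standard Zsh mechanisms, sketched briefly:

```shell
# 1. Quote the whole remote path so Zsh never sees the wildcard
scp 'user@remote:/path/to/files/*.txt' /local/destination/

# 2. Disable filename generation for this one invocation
noglob scp user@remote:/path/to/files/*.txt /local/destination/

# 3. Session-wide: pass unmatched patterns through unchanged
#    instead of raising "no matches found"
setopt nonomatch
```

Quoting is the most portable of the three, since it also works in Bash and POSIX sh.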

Testing and Troubleshooting with scp

For verification and troubleshooting, the manner in which Zsh interprets a command can be checked by prepending it with echo:

echo scp user@remote:/path/to/files/\*.txt /local/destination/

Alternatively, using the -v option with scp yields verbose output, aiding in the diagnosis of file transfer issues:

scp -v 'user@remote:/path/to/files/*.txt' /local/destination/

Best Practices for Wildcards in Zsh




Setting Up Environment Variables and Frequently Used Commands in Linux and macOS (zsh or bash)

Configuring environment variables and adding aliases or functions for frequently used commands can greatly enhance efficiency in the command-line environment. This guide provides detailed instructions on how to set environment variables temporarily and permanently, both for individual users and system-wide, as well as how to add aliases and functions in zsh or bash shells, applicable to both Linux and macOS systems.


(A) Setting Up Environment Variables Temporarily

To set an environment variable for the current terminal session, use the export command. This change will only persist for the duration of the session and will be cleared once the terminal is closed.

Example: To temporarily set the PYTHONPATH environment variable:

# Temporarily set PYTHONPATH
export PYTHONPATH="/path/to/python/libs"

This sets the PYTHONPATH variable to include the specified directory for the current session.
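A quick way to confirm the variable is visible both in the session and in child processes:

```shell
export PYTHONPATH="/path/to/python/libs"

# Show the value in the current session
printenv PYTHONPATH

# Exported variables are inherited by child processes
sh -c 'echo "$PYTHONPATH"'
```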


(B) Setting Up Environment Variables Permanently

To make environment variables persist across sessions, they must be added to the shell's configuration file. For zsh users, this is typically ~/.zshrc; for bash users, it is ~/.bashrc.

Step-by-Step Instructions:

  1. Open the Configuration File
    • For zsh users:
      # Open .zshrc with a text editor
      emacs ~/.zshrc
    • For bash users:
      # Open .bashrc with a text editor
      emacs ~/.bashrc
  2. Add Environment Variables

    For example, to set the PYTHONPATH environment variable permanently:

    # Set PYTHONPATH permanently
    export PYTHONPATH="/path/to/python/libs"

    If using pyenv, it may be necessary to add:

    # Set up pyenv
    export PYENV_ROOT="$HOME/.pyenv"
    export PATH="$PYENV_ROOT/bin:$PATH"
  3. Save and Exit

    Save the file and exit the text editor.

  4. Apply the Changes Immediately
    • For zsh users:
      # Apply changes
      source ~/.zshrc
    • For bash users:
      # Apply changes
      source ~/.bashrc

Note: On macOS, the default shell is zsh (since macOS Catalina). The same steps apply for setting environment variables in zsh on macOS.


(C) Setting Environment Variables System-Wide

For environment variables that should be available to all users on the system, add them to system-wide configuration files. On Linux, these files are /etc/environment, /etc/profile, or /etc/bash.bashrc. On macOS, system-wide configurations for zsh can be added to /etc/zshenv or /etc/zshrc.

Editing System-Wide Configuration Files

  1. Open the Appropriate File with Administrative Privileges
    • On Linux:
      • To edit /etc/environment:
        sudo emacs /etc/environment
      • To edit /etc/profile:
        sudo emacs /etc/profile
    • On macOS (zsh):
      sudo emacs /etc/zshrc
  2. Add Environment Variables

    For example, to set PYTHONPATH globally:

    # Set PYTHONPATH globally
    export PYTHONPATH="/usr/local/lib/python3.9/site-packages"
  3. Save and Apply Changes

    Save the file and exit the text editor.

    To apply the changes, log out and log back in, or source the configuration file. Note that changes to some system-wide files may require a system reboot or re-login to take effect.
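One caveat worth sketching: on Linux, /etc/environment is parsed by pam_env and accepts only plain KEY=value lines, while /etc/profile and /etc/zshrc are shell scripts that use export:

```shell
# /etc/environment (read by pam_env): plain assignments only, no "export"
#   PYTHONPATH=/usr/local/lib/python3.9/site-packages

# /etc/profile or /etc/zshrc: normal shell syntax
export PYTHONPATH="/usr/local/lib/python3.9/site-packages"
```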



(D) Adding Frequently Used Commands with Aliases and Functions

Aliases and functions allow for efficient command reuse and can be added to the shell's configuration files.

D-1) Adding Aliases

Aliases are shortcuts for commands. To add aliases:

  1. Open the Configuration File
    • For zsh users:
      # Open .zshrc. If absent, use .zprofile
      emacs ~/.zshrc
    • For bash users:
      # Open .bashrc
      emacs ~/.bashrc
  2. Add Alias Definitions
    # Alias to compress and split a folder
    alias compress_folder='FOLDER="folder_name" && tar -zcvf - "$FOLDER" | split -b 4480M - "${FOLDER}.tar.gz."'
    
    # Alias to search a specific folder for a pattern
    alias grep_designated_folder='find /path/to/designated/folder -type f -print0 | xargs -0 grep -i "###" 1> tmp1 2> tmp2'
    • compress_folder: Compresses and splits a folder into chunks.
    • grep_designated_folder: Searches a specific folder for a pattern.
  3. Save and Apply Changes
    # Apply alias changes for zsh
    source ~/.zshrc
    
    # Apply alias changes for zsh when using .zprofile
    source ~/.zprofile
    
    # Apply alias changes for bash
    source ~/.bashrc

D-2) Adding Functions

Functions provide more flexibility with parameters than aliases. To add functions:

Add Function Definitions

# Function to compress a folder with a given name
compress_folder() {
   FOLDER="$1"
   tar -zcvf - "$FOLDER" | split -b 4480M - "${FOLDER}.tar.gz."
}

# Function to search a specified folder for a given pattern
grep_designated_folder() {
   find "$1" -type f -print0 | xargs -0 grep -i "$2" 1> tmp1 2> tmp2
}
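Example invocations of the two functions above (folder and pattern are hypothetical):

```shell
# Compress the folder "site" into 4480 MiB pieces:
# produces site.tar.gz.aa, site.tar.gz.ab, ...
compress_folder site

# Search /var/log for "error" (case-insensitive);
# matches land in tmp1, error messages in tmp2
grep_designated_folder /var/log error
```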

D-3) My Own .bashrc or .zprofile

alias gonginx="cd /opt/homebrew/etc/nginx/"
alias gohttp="cd /opt/homebrew/var/www/"

cd /opt/homebrew/var/www    

function tar_backup_prototype() {
    cd /opt/homebrew/var || return
    filename="WEB$(date +"%Y%m%d")"
    tar -zcvf "${filename}.tar.gz" www/
    cd /opt/homebrew/var/www || return    
}

function tar_backup() {
    cd /opt/homebrew/var || return

    # Initial base filename
    base_filename="WEB$(date +"%Y%m%d")"
    filename="${base_filename}.tar.gz"

    # Check if the filename already exists, and append a counter if necessary
    counter=1
    while [ -e "$filename" ]; do
        filename="${base_filename}_${counter}.tar.gz"
        counter=$((counter + 1))
    done

    # Create the tar archive with the unique filename
    tar -zcvf "$filename" www/

    # Return to the specified directory
    cd /opt/homebrew/var/www || return
}
    

function tar_web() {
    # Check if a filename argument is provided
    if [ -z "$1" ]; then
        echo "Usage: tar_web <filename>"
        return 1  # Exit the function with a non-zero status
    fi

    # Navigate to the specified directory or exit if it fails
    cd /opt/homebrew/var || return

    # Create the tar.gz archive with the provided filename
    tar -zcvf "${1}.tar.gz" www/

    # Navigate back to the www directory or exit if it fails
    cd /opt/homebrew/var/www || return
}
function scp_backup_today() {
    scp "ngene.org:/opt/homebrew/var/WEB$(date +"%Y%m%d")*.tar.gz" ~/Desktop/
}

scp2web() {
  local filename="$1"
  scp "${filename}"* ngene.org:/opt/homebrew/var/www/
}
eval "$(/opt/homebrew/bin/brew shellenv)"
export PATH="/opt/homebrew/sbin:$PATH"
    
alias zprofile_change='emacs ~/.zprofile'
alias zprofile_apply='source ~/.zprofile'

# Function to search for a specific term within files under a specified directory, case-insensitive.
function file_grep() {
    # Check if both search term and search path are provided
    if [[ -z "$1" || -z "$2" ]]; then
        echo "Usage: file_grep <search_term> <search_path>"
        echo "Example: file_grep \"nginx\" /opt/homebrew"
        return 1
    fi

    # Assign arguments to variables for clarity
    search_term="$1"
    search_path="$2"

    # Execute the search command
    sudo find "$search_path" -type f -print0 | xargs -0 grep -i "$search_term"
}

# Function to search files by regex in a specified directory (case-insensitive)
function find_re() {
    # Display usage instructions if arguments are missing
    if [[ -z "$1" || -z "$2" ]]; then
        echo "Usage: find_re <search_path> <regex_pattern>"
        echo "Example: find_re /opt/homebrew '.*frank.*'"
        return 1
    fi

    # Assign arguments to variables for readability
    search_path="$1"
    regex_pattern="$2"

    # Execute the find command with case-insensitive regex
    find "$search_path" -type f -iregex "$regex_pattern"
}

# Function to search for files with a case-insensitive substring match in the filename
function find_str() {
    # Display usage instructions if arguments are missing
    if [[ -z "$1" || -z "$2" ]]; then
        echo "Usage: find_str  "
        echo "Example: find_str /opt/homebrew '###'"
        return 1
    fi

    # Assign arguments to variables for clarity
    search_path="$1"
    search_text="$2"

    # Execute the find command with case-insensitive name matching
    find "$search_path" -type f -iname "*$search_text*"
}

##################################
alias emacs="emacs -nw"

(E) Using Environment Variables, Aliases, and Functions

Once the environment variables, aliases, or functions are set in the shell's configuration file, they become available in every new terminal session.

Using the compress_folder Function: To compress and split a folder named folder_name, run:

# Compress and split a folder
compress_folder folder_name

Automatic Environment Variables: The PYTHONPATH variable will be automatically set upon opening a new terminal, allowing Python to locate additional libraries specified in the path.


Advanced Unix-Like Command Explanatory


Collections of Unix-Like Commands

(A) Advanced Listing with ls

Recursive Listing (-R)

The ls -R command lists all files and directories recursively. This means it will display the contents of the current directory and all subdirectories, which is useful for viewing a complete directory structure.

Human-Readable Sizes (-lh)

The ls -lh command displays file sizes in a human-readable format, showing sizes in kilobytes (KB), megabytes (MB), or gigabytes (GB), as appropriate. This makes it easier to understand file sizes at a glance compared to the default byte-based format.

Detailed Listing with Time Sorting (-lart)

The ls -lart command combines several options to provide an advanced view of files:

• -l: long listing format
• -a: includes hidden files (those beginning with a dot)
• -r: reverses the sort order
• -t: sorts by modification time

Because -r reverses the time sort, the oldest entries appear first. This combination is particularly useful when reviewing a directory’s history, starting with the oldest files or hidden system files.

Sorting by File Size in Reverse (-lSr)

The ls -lSr command lists files by size in ascending order:

• -l: long listing format
• -S: sorts files by size, largest first
• -r: reverses the order, so the smallest files appear at the top

This method is useful when the focus is on analyzing smaller files first, which can be helpful in managing storage space efficiently.

Displaying Output with Colors (--color=auto)

The ls --color=auto command adds color to the output, distinguishing files, directories, and symbolic links by color. This visual enhancement simplifies identification of different types of files within the terminal. Note that --color=auto belongs to GNU ls on Linux; the BSD ls bundled with macOS uses the -G flag for the same effect.


(B) Deleting Files with rm

Using Quotes for Files with Spaces

When a file name contains spaces, quotes are necessary to ensure the shell interprets it correctly. For example:

rm "file with spaces.txt"

In this case, either single or double quotes can be used to handle the spaces in the file name properly.

Escaping Spaces

An alternative method for handling file names with spaces is to escape each space with a backslash (\):

rm file\ with\ spaces.txt

This approach is particularly effective when dealing with multiple files or file names containing special characters directly in the terminal.

Preventing Critical Deletions (--preserve-root)

The rm -rf --preserve-root command adds an extra safeguard, ensuring the root directory (/) is never deleted. This is crucial to prevent accidental system-wide deletion.

Deleting Files by Pattern (rm !(*.txt))

The rm !(*.txt) command deletes all files except those matching a specific pattern, such as text files. This requires enabling extglob with the command:

shopt -s extglob

This method provides control over batch deletion while protecting specific file types.
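As a concrete illustration, the pattern can be exercised safely in a scratch directory. The file names below are hypothetical; note that extglob is a bash feature, while zsh users would instead use setopt extendedglob with the pattern ^*.txt:

```shell
# Create a scratch directory with mixed file types
tmpdir=$(mktemp -d)
touch "$tmpdir/notes.txt" "$tmpdir/image.png" "$tmpdir/video.mov"

# -O extglob enables the option before the command string is parsed,
# so !(*.txt) expands to every file NOT ending in .txt
(cd "$tmpdir" && bash -O extglob -c 'rm !(*.txt)')

ls "$tmpdir"    # only notes.txt remains
rm -rf "$tmpdir"
```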


(C) Enhancing Copy and Move Commands

cp – Enhanced Copying

The cp command offers several useful options:

• -r: copies directories recursively
• -i: prompts before overwriting existing files
• -p: preserves permissions, ownership, and timestamps
• -v: prints each file as it is copied

For more efficient copying, consider using rsync:

rsync -avh source destination

rsync is optimized for large transfers, preserving permissions and utilizing compression.

mv – Moving with Precision

The mv command can be used with additional options:

• -i: prompts before overwriting an existing destination
• -n: never overwrites an existing file
• -v: prints each move as it happens
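A short, self-contained demonstration of the overwrite-protection flags, using hypothetical file names in a scratch directory:

```shell
tmpdir=$(mktemp -d)
echo "draft" > "$tmpdir/draft.txt"
echo "final" > "$tmpdir/final.txt"

# -v reports each move; -n silently refuses to overwrite an existing target
mv -v "$tmpdir/draft.txt" "$tmpdir/renamed.txt"
mv -n "$tmpdir/renamed.txt" "$tmpdir/final.txt"   # final.txt exists, so nothing moves

cat "$tmpdir/final.txt"    # still "final"
rm -rf "$tmpdir"
```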


(D) Advanced Searching and File Management

find – Complex Search
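A sketch of a few commonly combined find predicates, run against a scratch directory with hypothetical file names:

```shell
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/logs"
touch "$tmpdir/logs/app.log" "$tmpdir/logs/app.log.1" "$tmpdir/readme.md"

# Name pattern, restricted to regular files
find "$tmpdir" -type f -name '*.log'

# Files modified within the last day, acting on each match with -exec
find "$tmpdir" -type f -name '*.log.*' -mtime -1 -exec rm {} +

ls "$tmpdir/logs"    # only app.log remains
rm -rf "$tmpdir"
```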

awk – Advanced Text Processing

Combined with process monitoring, the following command lists users and processes consuming more than 50% CPU:

ps aux | awk '$3 > 50 {print $1, $3, $11}'

sed – Streamlined Text Editing
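Two of the most common sed idioms, shown on synthetic input. Note that in-place editing differs between platforms: GNU sed uses -i, while the BSD sed shipped with macOS requires -i '':

```shell
conf=$(mktemp)
printf 'host=localhost\nport=8080\n' > "$conf"

# s///: substitute text, printing the result to stdout
sed 's/localhost/127.0.0.1/' "$conf"

# -n with /pattern/p: print only the lines that match
sed -n '/port/p' "$conf"     # port=8080

rm "$conf"
```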


(E) Disk Usage and Process Monitoring

df and du – Disk Usage Analysis
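A minimal demonstration of the two tools: df reports filesystem-level usage, du measures a directory tree (the scratch file below is only there to give du something to count):

```shell
# Mounted filesystems with human-readable sizes
df -h

# Total size of a directory tree: -s summarizes, -h humanizes
tmpdir=$(mktemp -d)
dd if=/dev/zero of="$tmpdir/blob" bs=1024 count=64 2>/dev/null
du -sh "$tmpdir"
rm -rf "$tmpdir"
```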

ps and top – Process Monitoring
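A quick, portable way to snapshot the heaviest processes; the one-shot top invocations differ by platform, so both forms are noted as comments:

```shell
# Snapshot of all processes, sorted by CPU usage (column 3), top five shown
ps aux | sort -nrk 3,3 | head -n 5

# One-shot top snapshots differ by platform:
#   Linux: top -b -n 1 | head -n 15
#   macOS: top -l 1 | head -n 15
```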


(F) Networking and Efficient Command Chaining

netstat and ss – Network Monitoring
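A sketch for listing listening TCP sockets on either platform: ss is the modern tool on Linux, while macOS and older systems provide netstat:

```shell
if command -v ss >/dev/null 2>&1; then
    ss -tln                          # -t TCP, -l listening, -n numeric output
else
    netstat -an | grep -i LISTEN || true
fi
```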

xargs – Efficient Command Chaining
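The two most common xargs patterns, shown on synthetic input (the backup file names are hypothetical):

```shell
# -n 1 invokes the command once per argument; -I {} substitutes each item
printf '%s\n' alpha beta gamma | xargs -n 1 echo item:
seq 1 3 | xargs -I {} echo "backup-{}.tar.gz"
```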


(G) Useful Aliases and Synchronization

alias – Command Shortcuts

Setting an alias in ~/.bashrc or ~/.zshrc helps reduce repetitive typing:

alias proj="cd /home/user/projects"
alias ll="ls -lh"
alias cls="clear"  

rsync – Smart Synchronization
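A minimal local sync between two scratch directories; the same flags work unchanged for remote destinations of the form user@host:/path (assuming rsync is installed):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
echo "hello" > "$src/file.txt"

# -a archive mode (recursion, permissions, times), -v verbose, -h human-readable
# The trailing slash on "$src/" copies the directory's contents, not the directory itself
rsync -avh "$src/" "$dst/"

cat "$dst/file.txt"    # hello
rm -rf "$src" "$dst"
```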

These advanced commands and techniques offer powerful control over file management, system monitoring, and data transfers in Linux and macOS.


Searching Through Files for Text

(A) grep -r "###" /path/to/designated/folder

This command searches recursively for the string "###" in all files and subdirectories under the specified directory.

Limitations:

• Every matching line is printed, which can be overwhelming in large directory trees.
• Binary files can produce noisy matches unless they are skipped with -I.
• The search is case-sensitive unless -i is added.


(B) grep -rl "###" /path/to/designated/folder

This command functions similarly to the first, with the addition of the -l flag, which modifies the output to display only filenames containing the matching text, without showing the matching lines.

Advantages:

• The output is a clean list of filenames, one per line, which is easy to feed into other commands.

Limitations:

• The matching lines themselves are not shown, so the context of each match is lost.


(C) find /path/to/designated/folder -type f | xargs grep -i "###" 1> tmp1 2> tmp2

This command begins by using find to list all files, then pipes the results to grep via xargs, which searches for the specified string "###".

Advantages:

• Scales to very large file sets, since xargs batches filenames into as few grep invocations as possible.
• The -i flag makes the search case-insensitive, and 1> tmp1 2> tmp2 separates the results from error messages.

Limitations:

• Filenames containing spaces or special characters are split incorrectly, causing missed files or errors.


(D) find /path/to/designated/folder -type f -print0 | xargs -0 grep -i "###" 1> tmp1 2> tmp2

This command enhances the previous one by handling filenames that contain spaces or special characters more effectively.

Advantages:

• -print0 and -0 pass filenames separated by NUL characters, so spaces, quotes, and newlines in names are handled safely.

Limitations:

• The syntax is slightly more verbose, and both flags must be used together for the pipeline to work.
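The difference between the plain and NUL-delimited pipelines is easiest to see with a filename that contains spaces:

```shell
tmpdir=$(mktemp -d)
echo "### marker line" > "$tmpdir/file with spaces.txt"

# Without -print0/-0 this filename would be split into separate arguments;
# the NUL-delimited handoff keeps it intact
find "$tmpdir" -type f -print0 | xargs -0 grep -il "###"

rm -rf "$tmpdir"
```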



Locating Installed Command-Line Tools on macOS: find "$(brew --prefix)" -name [tool_name] -type f

This document explains how to determine the installation path of a command-line tool on macOS. The instructions focus on scenarios involving Homebrew installations, though they can be adapted to any other method of tool installation. Every step and detail is preserved from earlier discussions but rearranged and generalized to maintain a high level of clarity and professionalism.

Purpose of the find "$(brew --prefix)" -name [tool_name] -type f Command

This command is a convenient way to locate a specific file—such as a binary for a tool—within the directory structure managed by Homebrew. The example below uses [tool_name] as a placeholder; substitute the actual tool’s name (e.g., ffmpeg, git, or any other executable).

find "$(brew --prefix)" -name [tool_name] -type f
  1. brew --prefix: Returns the base directory where Homebrew is installed. On Apple Silicon (M1/M2) systems, this is commonly /opt/homebrew; on Intel-based Macs, /usr/local; some custom Homebrew setups may differ.
  2. Substitution with $(...): When the shell processes brew --prefix inside $(...), it replaces that portion of the command with the actual path (e.g., /opt/homebrew), resulting in:
    find "/opt/homebrew" -name [tool_name] -type f
  3. find "$(brew --prefix)": Instructs the find utility to begin searching in the Homebrew prefix directory returned by brew --prefix.
  4. -name [tool_name]: Tells find to look for files named exactly [tool_name].
  5. -type f: Restricts the search to regular files, excluding directories, symlinks, or other file types.

Outcome: The command scans Homebrew’s installation tree for files named [tool_name] and helps pinpoint the exact location of the installed binary.

Searching Beyond Homebrew

Some tools may not have been installed using Homebrew, or they may be installed in a location not covered by the Homebrew directory structure. In such cases, the following approaches can be used:

  1. Quick Checks with which or command -v

    which [tool_name]

    or

    command -v [tool_name]

    If [tool_name] is found in the PATH, these commands return the absolute path (for example, /usr/local/bin/[tool_name]). If nothing is returned, the tool is not on the system’s PATH.

  2. Comprehensive System-Wide Search

    If the exact location remains unknown, it may be necessary to search the entire file system:

    sudo find / -name [tool_name] -type f 2>/dev/null
    • /: Starts the search from the root directory.
    • 2>/dev/null: Redirects error messages (such as permission denials) to /dev/null, creating a cleaner output.

    This approach can take significantly longer than searching only the Homebrew prefix because it scans every accessible directory on the system.

Benefits of Using find "$(brew --prefix)" -name [tool_name] -type f

Additional Notes about the find Utility

  1. No Extra Installation Required: The find command is included by default on macOS (as well as most Linux and Unix-like systems). There is no need to install additional software to run find.
  2. Flexibility in Search Scope: If a tool is installed manually or obtained from a different package manager, it is still possible to use find by directing it to a suspected location or by conducting a system-wide search.
  3. Applicability to Any Tool: Although the examples here often reference media-processing utilities (e.g., ffmpeg), the same syntax applies to any file name. Substitute [tool_name] to locate other binaries or resources.

Written on February 12, 2025


Managing Processes in Linux and macOS

Managing processes is a fundamental aspect of system administration in both Linux and macOS environments. Understanding how to check, search for, and control processes is essential for maintaining system performance and stability. This guide provides detailed instructions on managing processes, incorporating tools and commands available in both Linux and macOS.


(A) Checking Processes

A-1) Using the ps Command

The ps (process status) command provides a snapshot of current processes.
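A few commonly used invocations, all portable across Linux and macOS:

```shell
# BSD-style: all processes, with owner, CPU, memory, and command
ps aux | head -n 5

# System V style, common in scripts: full-format listing
ps -ef | head -n 5

# Only the current user's processes
ps -u "$(id -un)"
```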

A-2) Using the top Command

The top command provides a dynamic, real-time view of running processes.

Note: On macOS, top has some differences in options and display.

A-3) Using the htop Command

htop is an interactive process viewer with a user-friendly interface.

A-4) Using the pstree Command

Displays processes in a tree format, showing parent-child relationships.


(B) Searching for Specific Processes

B-1) Using pgrep

Searches for processes based on name and other attributes.
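A self-contained sketch: start a disposable background process, then locate it by name (assuming pgrep is installed, which it is on macOS and most Linux distributions):

```shell
sleep 60 &
bgpid=$!

pgrep sleep        # PIDs of every process named "sleep"
pgrep -fl sleep    # -f matches the full command line, -l adds the name

kill "$bgpid"
```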

B-2) Using ps with grep

Filters the list of processes to find specific ones.

B-3) Filtering in top and htop


(C) Identifying Problematic Processes

C-1) High Resource Usage

Processes consuming excessive CPU or memory can degrade system performance.

C-2) Unresponsive or Zombie Processes

Processes that are not functioning correctly or have become defunct.


(D) Analyzing Open Files and Network Connections

D-1) Using the lsof Command

Lists open files and the processes that opened them.
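A sketch of the most common lsof queries; the port and command names in the comments are illustrative only:

```shell
# Which files a given process has open: start a disposable process and inspect it
tail -f /dev/null &
bgpid=$!
lsof -p "$bgpid" | head -n 5

# Common network queries (shown for reference):
#   lsof -i :8080    processes bound to port 8080
#   lsof -i TCP      all TCP connections
kill "$bgpid"
```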

D-2) Using the netstat Command

Displays network-related information.


(E) Killing Specific Processes

E-1) Using the kill Command

Sends a signal to a process to terminate it.
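A safe, self-contained demonstration using a disposable background process:

```shell
sleep 300 &
pid=$!

kill "$pid"          # sends SIGTERM (signal 15): a polite request to exit
# kill -9 "$pid"     # SIGKILL: forces termination when SIGTERM is ignored

wait "$pid" 2>/dev/null || true   # reap the process; its status reflects the signal
kill -0 "$pid" 2>/dev/null || echo "process $pid is gone"
```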

E-2) Using the killall Command

Terminates processes by name rather than PID.

Note: On macOS, killall targets processes by their full command name as displayed in ps or top. The command is case-sensitive and requires the exact process name.

E-3) Using the pkill Command

Similar to pgrep, but sends signals to processes.

E-4) Killing Processes in htop


(F) Monitoring System Performance

F-1) Using vmstat

Reports virtual memory statistics.

vmstat 2 5

Note: On macOS, use vm_stat (with an underscore).

vm_stat

F-2) Using iostat

Reports CPU and input/output statistics.

iostat 2 5

F-3) Using sar (Linux Only)

Collects, reports, or saves system activity information.

sar -u 2 5

(G) Best Practices and Security

G-1) Verify Before Termination

Ensuring the correct process is targeted helps prevent unintended system behavior.

G-2) Use Signals Appropriately

G-3) Consider Process Hierarchies

G-4) Monitor System Logs

G-5) Permissions and Security

Some processes require root privileges to manage. Operate with the least privilege necessary.

By understanding and utilizing these tools and commands, processes can be effectively managed in both Linux and macOS environments, ensuring optimal system performance and stability.




Accessing and Mounting External Drives

In Unix-like environments and macOS, external drives such as SD cards or USB disks are typically mounted in specific directories, making them accessible from the command line. These drives may be automatically mounted in designated directories, or manual mounting can be employed for greater control.


(A) Accessing External Drives on macOS

On macOS, external drives are automatically mounted in the /Volumes directory. Each drive appears as a folder within this directory, named according to the drive’s label, allowing for organized and predictable access.

cd /Volumes
ls

After navigating to /Volumes, using the ls command lists the mounted drives. For instance, if an SD card is labeled "SDCARD," access it directly by specifying the drive’s path:

cd /Volumes/SDCARD

(B) Accessing External Drives on Linux (Unix-like)

In most Linux distributions, external drives are generally mounted in either /media or /mnt, with specific mounting practices depending on distribution and user configuration:

Automatic Mounting

Drives are usually mounted automatically in /media/username/DRIVENAME, where username represents the logged-in user.

cd /media/username/DRIVENAME

Manual Mounting

For manual mounting, /mnt is commonly used as a directory for temporary mounts. This process requires the use of the mount command.

sudo mount /dev/sdX1 /mnt
cd /mnt

Replace /dev/sdX1 with the correct device name for the external drive. For example, /dev/sdb1 often denotes the first partition on a USB disk.


(C) Using the mount Command for Manual Mounting

The mount command provides flexibility for mounting devices, allowing access to a variety of filesystems and external storage. The command is structured as follows:

sudo mount -o options device mount_point

Example Commands

1. Mounting a USB Drive Manually: To mount a USB drive (e.g., /dev/sdb1) to /mnt/usb, use:

sudo mount /dev/sdb1 /mnt/usb

Before accessing the device, ensure that the /mnt/usb directory exists, creating it if necessary:

sudo mkdir -p /mnt/usb

2. Mounting an ISO File as a Loop Device: ISO files are often mounted as loop devices, making their files accessible without burning them to physical media. This example mounts an ISO file as a read-only loop device using the iso9660 filesystem type:

sudo mount -o loop,ro -t iso9660 /path/to/file.iso /mnt/iso

3. Unmounting a Device: To safely remove a mounted device, unmount it using the umount command:

sudo umount /mnt/usb

Ensuring the device is unmounted before physically disconnecting it helps prevent data loss or corruption.

These practices allow for flexible and efficient management of external drives and ISO files across Unix-like environments, providing consistent access through automatic and manual mounting techniques.


Viewing Shell History Using Emacs: emacs ~/.zsh_history

The following instructions describe the process for accessing the complete shell command history using Emacs. The procedure is outlined in a systematic manner, providing details for both bash and zsh shells.

  1. Determine the Active Shell

    A shell history file must be identified before proceeding:

    • bash: The history is typically stored in the file ~/.bash_history
    • zsh: The history is usually stored in the file ~/.zsh_history
  2. Flush the Current Session’s History

    Before opening the history file, it is advisable to ensure that the session’s command history is fully written to the history file.

    • bash: Run history -a to append the session's recent commands to the history file.
    • zsh: Run fc -W to write the current session's history to the history file.

    Execute the corresponding command in the terminal to update the history file.

  3. Open the History File in Emacs

    Once the history file is updated, Emacs can be used to view and search the command history. Launch Emacs with the appropriate file as follows:

    • For bash: emacs ~/.bash_history
    • For zsh: emacs ~/.zsh_history

    Opening the file in Emacs allows navigation, search, and editing of the complete command history.

Written on April 1, 2025


Debian OS from macOS


Downloading Debian ISO Using Jigdo on macOS

This guide provides detailed instructions for downloading Debian ISO files using Jigdo on a macOS system. The steps are organized to ensure clarity and efficiency, addressing potential challenges that may arise during the process.


(A) Install Jigdo via Homebrew

Homebrew serves as a package manager for macOS, facilitating the installation of various software packages, including Jigdo.

A-1) Install Jigdo:

brew install jigdo

Execute this command in the Terminal to install Jigdo.

A-2) Verify Jigdo Installation:

brew list jigdo

Confirm that Jigdo has been installed correctly. Typical output includes executable files located in /opt/homebrew/Cellar/jigdo/0.8.2/bin/, such as:

• jigdo-file
• jigdo-lite

(B) Ensure Homebrew’s Binary Directory Is in the PATH

To utilize Jigdo commands seamlessly, ensure that Homebrew’s binary directory is included in the system’s PATH.

B-1) Check PATH Inclusion:

echo $PATH

Verify if /opt/homebrew/bin is part of the PATH. If not present, proceed to update the PATH.

B-2) Update the PATH:

Add Homebrew’s binary directory to the PATH by editing the shell configuration file (~/.zshrc for Zsh or ~/.bashrc for Bash).

For Zsh users:

echo 'export PATH="/opt/homebrew/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc

For Bash users:

echo 'export PATH="/opt/homebrew/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc

B-3) Verify Executable Access:

which jigdo-lite

A valid path, such as /opt/homebrew/bin/jigdo-lite, indicates successful configuration.

B-4) Link Jigdo Manually (If Necessary):

If jigdo-lite is still not found, manual linking may be required. Execute the following commands:

brew unlink jigdo && brew link jigdo
which jigdo-lite


(C) Download and Prepare Debian Jigdo Files

To reconstruct Debian ISO files, both .jigdo and .template files are required. These files provide the necessary information and structure for the ISO assembly.

C-1) Download the .jigdo and .template Files:

Navigate to the Debian Jigdo DVD images page on the official Debian website. Download the corresponding .jigdo and .template files and save them in ~/Desktop/Debian/.


(D) Execute Jigdo to Download and Assemble the Debian ISO

With the necessary Jigdo files in place, proceed to download and assemble the Debian ISO.

D-1) Navigate to the Jigdo Files Directory:

cd ~/Desktop/Debian/

D-2) Run Jigdo-lite for a Single ISO:

jigdo-lite debian-12.7.0-i386-DVD-1.jigdo

D-3) Batch Download Multiple ISOs:

For scenarios involving multiple Jigdo and template files (e.g., 21 ISOs), scripting can automate the download process. Below are two versions of the Zsh script, each with different approaches for automating responses to Jigdo prompts.

For Zsh (using yes for continuous "Enter" presses):

#!/bin/zsh

# Directory containing all .jigdo and .template files
JIGDO_DIR=~/Desktop/Debian/

# Navigate to the Jigdo directory
cd "$JIGDO_DIR" || exit

# Loop through all .jigdo files and execute jigdo-lite, with continuous "Enter" key presses
for jigdo_file in *.jigdo; do
   echo "Processing $jigdo_file..."

   # Use 'yes' to simulate continuous "Enter" presses for each file
   yes '' | jigdo-lite "$jigdo_file"
done

This version of the script employs the yes command to repeatedly send an empty string, simulating continuous "Enter" presses until all Jigdo prompts are satisfied. This method is useful if the number of prompts varies or if additional confirmations are required during the download process.

For Zsh (using printf for exactly two "Enter" presses):

#!/bin/zsh

# Directory containing all .jigdo and .template files
JIGDO_DIR=~/Desktop/Debian/

# Navigate to the Jigdo directory
cd "$JIGDO_DIR" || exit

# Loop through all .jigdo files and execute jigdo-lite, with exactly two "Enter" key presses
for jigdo_file in *.jigdo; do
   echo "Processing $jigdo_file..."

   # Use 'printf' to simulate pressing "Enter" twice for each file
   printf '\n\n' | jigdo-lite "$jigdo_file"
done

This version of the script uses printf to send exactly two newline characters, simulating two "Enter" key presses. It is beneficial when only two prompts are expected, as it avoids continuous input and provides controlled interaction with the Jigdo process.

For Bash (using yes for continuous "Enter" presses):

#!/bin/bash

# Directory containing all .jigdo and .template files
JIGDO_DIR=~/Desktop/Debian/

# Navigate to the Jigdo directory
cd "$JIGDO_DIR" || exit

# Loop through all .jigdo files and execute jigdo-lite, with continuous "Enter" key presses
for jigdo_file in *.jigdo; do
   echo "Processing $jigdo_file..."

   # Use 'yes' to simulate continuous "Enter" presses for each file
   yes '' | jigdo-lite "$jigdo_file"
done

For Bash (using printf for exactly two "Enter" presses):

#!/bin/bash

# Directory containing all .jigdo and .template files
JIGDO_DIR=~/Desktop/Debian/

# Navigate to the Jigdo directory
cd "$JIGDO_DIR" || exit

# Loop through all .jigdo files and execute jigdo-lite, with exactly two "Enter" key presses
for jigdo_file in *.jigdo; do
   echo "Processing $jigdo_file..."

   # Use 'printf' to simulate pressing "Enter" twice for each file
   printf '\n\n' | jigdo-lite "$jigdo_file"
done

Choose the script version based on the expected interaction with Jigdo prompts. The yes command version is suitable for continuous responses, while the printf version provides a precise number of responses.

D-4) Execute the Script:

chmod +x ~/Desktop/download_isos.zsh
~/Desktop/download_isos.zsh

Replace with download_isos.sh for Bash.




Creating a Bootable SD Card for Debian Installation Using Windows and macOS

To facilitate Debian installation from an SD card, the ISO file must be properly written to the SD card. This guide provides a brief overview for Windows users utilizing Rufus, followed by detailed steps for macOS users using built-in tools.


Windows Instructions

Download Rufus from the official website. Rufus is a straightforward tool for creating bootable media from ISO files on Windows.


macOS Instructions (Using Built-In Tools)

This section provides detailed steps to create a bootable Debian SD card on macOS without third-party software.

(A) Identify the SD Card in Terminal:

Insert the SD card and open Terminal. Use the following command to display all drives and identify the SD card by its size:

diskutil list

Note the identifier for the SD card (such as /dev/disk9).

(B) Unmount and Erase the SD Card:

Unmount the SD card with the command:

diskutil unmountDisk /dev/disk9

Next, erase and format the SD card:

sudo diskutil eraseDisk FAT32 BOOT MBRFormat /dev/disk9

The command eraseDisk initiates the process of removing all existing data on the SD card. The FAT32 parameter specifies the filesystem to be used, ensuring compatibility across various operating systems. The label BOOT names the new partition, and MBRFormat sets the partition scheme to Master Boot Record, which is suitable for booting purposes.

/dev/disk0 (internal, physical):
#:                       TYPE NAME                    SIZE       IDENTIFIER
0:      GUID_partition_scheme                        *2.0 TB     disk0
1:             Apple_APFS_ISC Container disk1         524.3 MB   disk0s1
2:                 Apple_APFS Container disk3         2.0 TB     disk0s2
3:        Apple_APFS_Recovery Container disk2         5.4 GB     disk0s3

/dev/disk3 (synthesized):
#:                       TYPE NAME                    SIZE       IDENTIFIER
0:      APFS Container Scheme -                      +2.0 TB     disk3
                              Physical Store disk0s2
1:                APFS Volume Macintosh HD - Data     807.5 GB   disk3s1
2:                APFS Volume Macintosh HD            10.8 GB    disk3s3
3:              APFS Snapshot com.apple.os.update-... 10.8 GB    disk3s3s1
4:                APFS Volume Preboot                 12.3 GB    disk3s4
5:                APFS Volume Recovery                1.9 GB     disk3s5
6:                APFS Volume VM                      2.1 GB     disk3s6

/dev/disk4 (disk image):
#:                       TYPE NAME                    SIZE       IDENTIFIER
0:      GUID_partition_scheme                        +10.4 GB    disk4
1:                 Apple_APFS Container disk5         10.4 GB    disk4s1

/dev/disk5 (synthesized):
#:                       TYPE NAME                    SIZE       IDENTIFIER
0:      APFS Container Scheme -                      +10.4 GB    disk5
                              Physical Store disk4s1
1:                APFS Volume watchOS 10.5 21T575 ... 10.1 GB    disk5s1

/dev/disk6 (disk image):
#:                       TYPE NAME                    SIZE       IDENTIFIER
0:      GUID_partition_scheme                        +17.6 GB    disk6
1:                 Apple_APFS Container disk7         17.6 GB    disk6s1

/dev/disk7 (synthesized):
#:                       TYPE NAME                    SIZE       IDENTIFIER
0:      APFS Container Scheme -                      +17.6 GB    disk7
                              Physical Store disk6s1
1:                APFS Volume iOS 17.5 21F79 Simul... 17.0 GB    disk7s1

/dev/disk9 (internal, physical):
#:                       TYPE NAME                    SIZE       IDENTIFIER
0:     FDisk_partition_scheme                        *32.0 GB    disk9
1:             Windows_FAT_32 bootfs                  268.4 MB   disk9s1
2:                      Linux                         31.7 GB    disk9s2

The output of the diskutil list command provides a detailed view of the storage devices connected to the MacBook. Below is an interpretation of each entry:

B-1) /dev/disk0 (internal, physical):

This device, with a capacity of 2.0 TB, represents an internal and physical disk, meaning it is permanently installed within the MacBook rather than being a removable or virtual drive. The disk utilizes the GUID Partition Scheme and contains the following partitions:

• Apple_APFS_ISC (disk0s1): a 524.3 MB partition reserved by macOS.
• Apple_APFS Container disk3 (disk0s2): the 2.0 TB container holding the main system volumes.
• Apple_APFS_Recovery (disk0s3): a 5.4 GB recovery partition.

B-2) /dev/disk3 (synthesized):

This entry signifies a synthesized APFS container generated by macOS, encompassing various volumes related to /dev/disk0. It includes multiple essential system partitions such as:

• Macintosh HD and Macintosh HD - Data: the system volume and the user-data volume.
• Preboot, Recovery, and VM: supporting volumes for booting, system recovery, and virtual memory.

B-3) /dev/disk4 to /dev/disk7 (disk images and synthesized):

These entries correspond to disk images, likely representing mounted virtual drives or other macOS system images. In this output they contain watchOS and iOS simulator volumes used by Apple's developer tools.

B-4) /dev/disk9 (internal, physical):

This 32.0 GB device is marked as both "internal" and "physical," indicating a physically removable medium recognized as part of the MacBook's internal hardware interface, such as an SD card slot. It employs an FDisk partition scheme, commonly associated with devices formatted for compatibility across various operating systems. The two partitions present are:

• Windows_FAT_32 "bootfs" (disk9s1): a 268.4 MB FAT32 boot partition.
• Linux (disk9s2): a 31.7 GB Linux partition.

The identifying characteristics of /dev/disk9—a size of 32.0 GB, an FDisk partition scheme, and a removable nature—indicate this is the SD card. Such removable drives are recognized as "internal, physical" due to their connection through a built-in card reader or slot, contrasting with virtual or purely internal SSDs and HDDs that are non-removable.

(C) Convert the ISO to IMG Format:

macOS requires the ISO to be in .img format for the dd utility. Convert the Debian ISO by running:

hdiutil convert -format UDRW -o ~/Desktop/debian.img ~/Desktop/debian-12.7.0-arm64-DVD-1.iso

The hdiutil command-line tool is utilized for working with disk images in macOS. The UDRW format specifies an uncompressed read/write image, which is necessary for the subsequent dd operation. The output location is set to ~/Desktop/debian.img, and the source file is ~/Desktop/debian-12.7.0-arm64-DVD-1.iso.

If macOS appends .dmg to the output file (resulting in debian.img.dmg), this extension remains acceptable for the next step.

(D) Write the IMG to the SD Card Using dd:

Use dd to transfer the IMG file to the SD card. The dd utility performs a low-level copy of data from one location to another. The parameters used are:

• if=: the input file, here the converted image on the Desktop.
• of=: the output device, here the raw SD card device.
• bs=1m: the block size, set to 1 MB to speed up the transfer.

The command to execute is:

sudo dd if=~/Desktop/debian.img.dmg of=/dev/disk9 bs=1m

Prior to executing this command, it is necessary to unmount the SD card using diskutil unmountDisk /dev/disk9. This ensures that no other processes are accessing the disk, preventing potential data corruption during the write operation.

Be cautious when using dd, as it can overwrite any specified drive without warning. The process can take several minutes, and no progress is shown by default; on Linux, GNU dd accepts a status=progress operand, while on macOS pressing Ctrl+T sends SIGINFO to display the current transfer status.

(E) Eject the SD Card:

Once the process is complete, safely eject the SD card with:

diskutil eject /dev/disk9

The SD card is now prepared as a bootable Debian installation medium.




Configuring Debian to Use Local ISO Repositories with Fallback to Online Sources

Configuring Debian to utilize local ISO files as repositories enhances package management efficiency, particularly in environments with limited or unreliable internet connectivity. This guide outlines the process of mounting multiple ISO files, updating the package manager’s sources list to prioritize local repositories, and automating the mounting process for sustained convenience.


(A) Prepare Mount Directories

A fundamental step involves creating directories designated for mounting each ISO file. Organizing these directories under /media maintains system orderliness.

sudo mkdir -p /media/debian-iso{1..21}
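The {1..21} in this command is brace expansion, a bash/zsh feature that generates all 21 directory names from a single pattern before mkdir runs. A quick demonstration in a scratch location:

```shell
# The shell expands the braces before mkdir runs, producing 21 arguments
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir"/debian-iso{1..21}
ls "$tmpdir" | wc -l    # 21 (in bash or zsh)
rm -rf "$tmpdir"
```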

(B) Mount ISO Files Using a Bash Script

Automating the mounting process ensures efficiency when handling multiple ISO files. A Bash script is employed to mount each ISO to its corresponding directory as a loop device.

B-1) Create the Mount Script:

emacs -nw ~/mount_debian_isos.sh

#!/bin/bash

# Base directory where ISO files are located
ISO_DIR=/home/frank/Downloads
MOUNT_DIR=/media

# Loop through all 21 ISOs and mount them
for i in {1..21}; do
    ISO_FILE="$ISO_DIR/debian-12.7.0-arm64-DVD-$i.iso"
    MOUNT_POINT="$MOUNT_DIR/debian-iso$i"

    if [ -f "$ISO_FILE" ]; then
        echo "Mounting $ISO_FILE to $MOUNT_POINT..."
        sudo mount -o loop,ro "$ISO_FILE" "$MOUNT_POINT"
    else
        echo "Warning: $ISO_FILE does not exist."
    fi
done

B-2) Run the Script:

chmod +x ~/mount_debian_isos.sh
~/mount_debian_isos.sh    

The mount command is versatile and used across various scenarios, from mounting ISO files to accessing network drives and USB devices. Below are several common examples of the mount command, demonstrating frequently used options and configurations:

1. Mount an ISO File as a Loop Device: This example mounts an ISO file as a read-only loop device using the ISO 9660 filesystem type.

sudo mount -t iso9660 -o loop /home/frank/Downloads/debian-9.5.0-amd64-DVD-1.iso /media/d1

2. Mount a USB Drive Automatically: Linux often automatically recognizes and mounts USB drives to /media/username/DRIVENAME. However, manual mounting is also possible:

sudo mount /dev/sdb1 /mnt/usb

3. Mount a Windows NTFS Drive: For dual-boot systems, accessing Windows partitions from Linux may require specifying the NTFS filesystem type.

sudo mount -t ntfs-3g /dev/sda1 /mnt/windows

4. Mount a Network Share (NFS): Network File System (NFS) is widely used for accessing remote file systems across a network.

sudo mount -t nfs 192.168.1.100:/shared-folder /mnt/nfs

5. Mount a CIFS (Windows/Samba) Network Share: CIFS (Common Internet File System) is a network protocol that allows access to shared folders from Windows or Samba servers.

sudo mount -t cifs -o username=frank,password=yourpassword //192.168.1.101/shared-folder /mnt/cifs

6. Mount a Disk Partition as Read-Only: For forensic or data recovery purposes, mounting a partition in read-only mode prevents any accidental modifications.

sudo mount -o ro /dev/sdc1 /mnt/readonly

7. Mount a Bind Directory (Make One Directory Accessible at Another Path): The bind option allows one directory to be mounted at another path, effectively mirroring its contents.

sudo mount --bind /var/www/html /mnt/website

(C) Update sources.list to Include Local and Online Repositories

Configuring the APT package manager to prioritize local ISO repositories while retaining the ability to access online sources involves editing the sources.list file. The order of entries dictates the priority, with earlier entries being preferred.

C-1) Edit sources.list:

sudo emacs /etc/apt/sources.list

C-2) Add Repository Entries:

Append the following lines to the end of the file. Ensure that bookworm is the correct codename for the Debian release in use. Adjust accordingly if a different release is active.

# Local Debian ISO repositories
deb [trusted=yes] file:/media/debian-iso1/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso2/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso3/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso4/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso5/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso6/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso7/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso8/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso9/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso10/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso11/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso12/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso13/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso14/ bookworm main contrib	
deb [trusted=yes] file:/media/debian-iso15/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso16/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso17/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso18/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso19/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso20/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso21/ bookworm main contrib

# Online Debian repositories
deb http://deb.debian.org/debian bookworm main contrib
deb-src http://deb.debian.org/debian bookworm main contrib

deb http://deb.debian.org/debian-security bookworm-security main contrib
deb-src http://deb.debian.org/debian-security bookworm-security main contrib

deb http://deb.debian.org/debian bookworm-updates main contrib
deb-src http://deb.debian.org/debian bookworm-updates main contrib
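
Typing 21 near-identical lines invites mistakes, so the local-repository block above can be generated with a short shell loop instead. This is a sketch; it assumes the /media/debian-isoN mount-point naming scheme used throughout this section:

```shell
# gen_local_repos: print one 'deb' entry per mounted ISO.
# $1 = number of mounted ISO images.
gen_local_repos() {
  i=1
  while [ "$i" -le "$1" ]; do
    echo "deb [trusted=yes] file:/media/debian-iso${i}/ bookworm main contrib"
    i=$((i + 1))
  done
}

# Append the entries to sources.list, e.g.:
#   gen_local_repos 21 | sudo tee -a /etc/apt/sources.list
gen_local_repos 21
```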

(D) Update the Package List

Refreshing the APT package database allows the system to recognize the newly configured repositories.

sudo apt update

This command updates the package index, enabling APT to acknowledge packages available from both local ISO repositories and online sources.


(E) Install Packages from Repositories

Packages can now be installed using APT, which draws on both the local ISO repositories and the online sources.

sudo apt install <package_name>

Replace <package_name> with the desired package to install.

For kernel and module development, typical package sets are:

# Debian (generic)
sudo apt-get install build-essential linux-headers-$(uname -r)

# Raspberry Pi
sudo apt-get install build-essential gcc raspberrypi-kernel-headers raspberrypi-kernel
sudo apt-get install build-essential emacs raspberrypi-kernel-headers git bc bison flex libc6-dev libncurses5-dev make

# Cross-compilation toolchain for 32-bit ARM targets
sudo apt install crossbuild-essential-armhf
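
To confirm which repository a package would be pulled from, apt-cache policy <package_name> lists every candidate source in priority order. The snippet below extracts the candidate version from sample policy output; the sample text is hypothetical, and on a live system the real command would be piped in instead:

```shell
# Parse the 'Candidate:' line out of apt-cache policy output.
# Live version: apt-cache policy gcc | awk '/Candidate:/ {print $2}'
sample='gcc:
  Installed: (none)
  Candidate: 4:12.2.0-3
  Version table:
     4:12.2.0-3 500
        500 file:/media/debian-iso1 bookworm/main i386 Packages'
candidate=$(printf '%s\n' "$sample" | awk '/Candidate:/ {print $2}')
echo "candidate version: $candidate"
```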

(F) Automate ISO Mounting on Startup (Optional)

To mount all ISO files automatically at system boot, entries can be added to the /etc/fstab file, making the local repositories available without manual intervention each time the system starts.

F-1) Edit /etc/fstab:

sudo emacs /etc/fstab

F-2) Add Mount Entries:

Append the following lines to the end of the file. Note that /etc/fstab does not expand ~, so the ISO paths must be absolute (replace <user> with the actual account name):

# Mount Debian ISO repositories
/home/<user>/Downloads/debian-12.7.0-i386-DVD-1.iso /media/debian-iso1 iso9660 loop,ro 0 0
/home/<user>/Downloads/debian-12.7.0-i386-DVD-2.iso /media/debian-iso2 iso9660 loop,ro 0 0
# ...
/home/<user>/Downloads/debian-12.7.0-i386-DVD-21.iso /media/debian-iso21 iso9660 loop,ro 0 0
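
As with sources.list, the repetitive fstab entries can be generated rather than typed. Keep in mind that /etc/fstab requires absolute paths (it does not expand ~); the /home/frank directory below is a placeholder for the actual ISO location:

```shell
# gen_fstab_entries: print one fstab line per ISO image.
# $1 = absolute directory holding the ISOs, $2 = number of ISOs.
gen_fstab_entries() {
  i=1
  while [ "$i" -le "$2" ]; do
    echo "$1/debian-12.7.0-i386-DVD-${i}.iso /media/debian-iso${i} iso9660 loop,ro 0 0"
    i=$((i + 1))
  done
}

# e.g.: gen_fstab_entries /home/frank/Downloads 21 | sudo tee -a /etc/fstab
gen_fstab_entries /home/frank/Downloads 21
```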

F-3) Reload the Filesystem Mounts:

sudo mount -a

This command mounts all filesystems specified in /etc/fstab without necessitating a system reboot.



Linux, for Kernel-Level Device Driver


Linux Commands for Kernel and Device Programmers

1. System Information and Architecture

Before diving into device and kernel programming, it is crucial to understand the core system information. The following commands offer insights into the Linux system's hardware, kernel, and architecture.

1.1 System Information

The hostnamectl command provides a basic overview of the system, including the hostname, operating system, and kernel version.

# System Information
hostnamectl 

1.2 Architecture and Kernel Information

The uname command displays kernel and architecture information.

# Kernel Version
uname -r

# Machine Architecture
uname -m 

To check the Linux distribution and version details:

# Linux Distribution and Version
lsb_release -a 

1.3 CPU Information

Detailed CPU information can be obtained using lscpu or by inspecting /proc/cpuinfo.

# Human-readable CPU Information
lscpu

# Detailed CPU Information
cat /proc/cpuinfo 
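
The two views can be cross-checked: the number of processor entries in /proc/cpuinfo should match the online CPU count. A small sketch that falls back to getconf where /proc is absent (e.g., on macOS):

```shell
# Count logical CPUs, preferring /proc/cpuinfo when available.
if [ -r /proc/cpuinfo ]; then
  cpus=$(grep -c '^processor' /proc/cpuinfo)
else
  cpus=$(getconf _NPROCESSORS_ONLN)
fi
echo "logical CPUs: $cpus"
```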

2. Device and Driver Information

Linux exposes detailed information about devices and their drivers through the /sys and /proc filesystems, as well as utility commands.

2.1 Listing Devices and Drivers

To check the drivers associated with devices and view system buses:

# List PCI Devices with Kernel Modules
lspci -k

# View Input Devices
cat /proc/bus/input/devices 

2.2 Exploring the /sys Filesystem

The /sys filesystem provides a way to interact with kernel objects.

For example, to check the status of a network interface (replace eth0 with an interface name listed under /sys/class/net):

# Network Interface Status
cat /sys/class/net/eth0/operstate 

To list available filesystems supported by the kernel:

# Available Filesystems
cat /proc/filesystems 
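
In /proc/filesystems, entries tagged nodev are virtual filesystems not backed by a block device. The snippet below separates out the block-backed ones; sample input is used here, and on a live system the output of cat /proc/filesystems would be piped in instead:

```shell
# Print only filesystems that require a block device (no 'nodev' tag).
# Live version: awk '$1 != "nodev" {print $1}' /proc/filesystems
sample="$(printf 'nodev\tsysfs\nnodev\ttmpfs\n\text4\n\tvfat\nnodev\tproc\n')"
block_fs=$(printf '%s\n' "$sample" | awk '$1 != "nodev" {print $1}')
echo "$block_fs"
```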

3. Hardware and Device Inspection

Understanding the hardware landscape and relationships between different buses and devices is essential for advanced device programming.

3.1 PCI Devices

The lspci command is used to inspect PCI devices:

# PCI Devices in Tree View
lspci -tv

# PCI Devices with Kernel Modules
lspci -k

# Verbose PCI Device Information
lspci -vv

# Filter PCI Devices (e.g., USB Controllers)
lspci -v | grep USB 

3.2 USB Devices

For USB device driver development:

# USB Devices in Tree View
lsusb -tv

# Verbose USB Device Information
lsusb -v

# Inspect Specific USB Device
lsusb -d <vendor_id:product_id> 
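
The vendor:product pair that lsusb -d expects can be pulled straight out of the plain lsusb listing, where it appears after the ID keyword. A sketch over sample output (the device lines are illustrative):

```shell
# Extract ID pairs (vendor:product) from lsusb listing lines.
# Live version: lsusb | awk '{print $6}'
sample='Bus 001 Device 002: ID 046d:c52b Logitech, Inc. Unifying Receiver
Bus 001 Device 003: ID 0781:5583 SanDisk Corp. Ultra Fit'
ids=$(printf '%s\n' "$sample" | awk '{print $6}')
echo "$ids"
```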

For debugging or interacting with USB devices programmatically, tools like usbmon and Wireshark can capture USB traffic.

3.3 Block Devices

To view block devices such as disks and partitions:

# List Block Devices
lsblk

# List Block Devices with Filesystem Information
lsblk -f 

3.4 SCSI Devices

For low-level inspection of SCSI devices:

# List SCSI Devices
lsscsi 

4. Interrupts and I/O Monitoring

Tracking interrupts and I/O performance is vital for optimizing system interactions with hardware.

4.1 Monitoring Interrupts

To view the number of interrupts per CPU for each I/O device:

# Interrupts per Device
cat /proc/interrupts 

This is useful when optimizing interrupt handling or diagnosing hardware IRQ conflicts.
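
The per-CPU columns in /proc/interrupts can be summed to a single total per IRQ line, which makes uneven interrupt distribution easier to spot. A sketch over a two-CPU sample (the counts are made up; a live system would feed /proc/interrupts to awk directly):

```shell
# Sum per-CPU interrupt counts for each IRQ line (skip the header row).
# Live version (two CPUs): awk 'NR > 1 {print $1, $2 + $3}' /proc/interrupts
sample='            CPU0       CPU1
   1:        900       1100   IO-APIC    1-edge      i8042
  24:       5000       7000   PCI-MSI 524288-edge    eth0'
totals=$(printf '%s\n' "$sample" | awk 'NR > 1 {print $1, $2 + $3}')
echo "$totals"
```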

4.2 Monitoring I/O Performance

The iostat and iotop commands help monitor I/O device performance:

# Extended I/O Statistics (updates every second)
iostat -x 1

# Block Device Statistics
iostat -d 1

# Monitor I/O Usage by Process
iotop 

5. Memory and Cache Information

Memory management and caching are critical when programming kernel modules or drivers.

5.1 Memory Information

To view detailed memory statistics:

# Memory Information
cat /proc/meminfo 

This provides information such as total memory, free memory, available swap, and cached memory.
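
The raw kB figures in /proc/meminfo are easier to reason about as a percentage. The sketch below computes available memory as a share of total, using sample values (the numbers are illustrative; on a live system awk would read /proc/meminfo directly):

```shell
# Compute MemAvailable as a percentage of MemTotal.
# Live version: awk '...' /proc/meminfo
sample='MemTotal:        8000000 kB
MemFree:         1200000 kB
MemAvailable:    4000000 kB'
pct=$(printf '%s\n' "$sample" | awk '
  /^MemTotal:/     { total = $2 }
  /^MemAvailable:/ { avail = $2 }
  END { printf "%d", avail * 100 / total }')
echo "available: ${pct}%"
```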

5.2 Cache Information

To list memory cache information:

# Cache Information
sudo lshw -C memory 

6. Detailed Hardware Information

Dedicated tools for inspecting buses, devices, and subsystems make it possible to delve into specific hardware capabilities.

6.1 Using lshw

The lshw command provides detailed information about the hardware configuration.

# Hardware Path Tree View
sudo lshw -short

# Class-specific Information
sudo lshw -short -C bus
sudo lshw -short -C cpu
sudo lshw -short -C storage

# Bus Information
sudo lshw -businfo 

7. Device Query and Configuration

Device-specific information can be extracted using hwinfo and other utilities.

7.1 Filesystems

To list all supported filesystems in the kernel:

# Supported Filesystems
cat /proc/filesystems 

This is useful when developing storage drivers or working with filesystems.

7.2 Using hwinfo

The hwinfo command provides detailed information about hardware components.

# Concise Hardware Summary
sudo hwinfo --short

# Detailed USB Information
sudo hwinfo --usb 

8. SCSI and Disk Management

For storage systems or SCSI devices, in-depth tools are necessary to inspect and configure devices.

8.1 SCSI Devices

To show the hierarchy of SCSI devices:

# SCSI Device Hierarchy
lsblk -s 

8.2 Disk Performance Monitoring

To obtain detailed SMART information for storage devices:

# SMART Information
sudo smartctl -a /dev/sda 

9. Monitoring Devices in Real-Time

Real-time monitoring of device and kernel activity is essential for performance tuning and debugging.

9.1 Process Monitoring

The top or htop commands can be used to monitor processes and system load:

# Real-time Process Monitoring
top

# Enhanced Process Monitoring
htop 

9.2 Block Device Monitoring

To monitor block device I/O:

# Block Device I/O Statistics
iostat -d 

9.3 Network Monitoring

To monitor network interfaces:

# Real-time Network Interface Monitoring
iftop 

10. Performance Monitoring

Tools like perf are used to monitor kernel and application performance, helping identify bottlenecks.

10.1 Using perf

To monitor CPU performance, system calls, and events:

# Live CPU Performance Monitoring
sudo perf top 

To record a performance profile:

# Record Performance Data
sudo perf record -a 

To display the recorded performance data:

# Report Performance Data
sudo perf report 

11. Kernel Configuration and Modules

Kernel modules extend system functionality without requiring a reboot and are commonly used in device driver development.

11.1 Listing Kernel Modules

To list all loaded kernel modules:

# List Loaded Kernel Modules
lsmod 

11.2 Loading and Unloading Modules

To load a kernel module manually:

# Load a Kernel Module
sudo modprobe <module_name> 

To unload a kernel module:

# Unload a Kernel Module
sudo modprobe -r <module_name> 

11.3 Module Information

To get detailed information about a specific module:

# Module Information
modinfo <module_name> 

This provides information such as module parameters, dependencies, and author.

11.4 Installing Kernel Headers

Kernel headers are required when building or debugging modules:

# Install Kernel Headers
sudo apt-get install linux-headers-$(uname -r) 

12. Kernel Logs and Debugging

Kernel logs are essential for tracking system errors and debugging device driver issues.

12.1 Viewing Kernel Logs

To view real-time kernel logs:

# Real-time Kernel Logs
dmesg -w 

Access kernel logs through the systemd journal:

# Kernel Logs via systemd journal
journalctl -k 

12.2 Filtering Logs

To filter logs and focus on specific messages:

# Filter Kernel Logs
dmesg | grep <keyword> 

Replace <keyword> with the module name or any relevant term to isolate specific log messages.
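
Since kernel messages mix upper and lower case freely, grep -i is a safer default, and -C adds context lines around each hit. A sketch over sample log lines (the messages are made up; a live system would pipe dmesg in):

```shell
# Case-insensitive filter, as one would run:
#   dmesg | grep -i -C 2 usb
sample='[    1.000] usb 1-1: new high-speed USB device number 2
[    1.100] e1000e: Intel(R) PRO/1000 Network Driver
[    1.200] USB hub found'
matches=$(printf '%s\n' "$sample" | grep -ic 'usb')
echo "usb-related lines: $matches"
```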

13. Kernel Modules and Hardware Interaction

Monitoring how kernel modules interact with hardware is crucial for device driver development.

13.1 Tracking Module Messages

To locate messages related to a specific kernel module:

# Module-related Kernel Messages
dmesg | grep <module_name> 

This assists in debugging issues like module loading or initialization.

14. Building and Inserting Kernel Modules

Kernel module development often requires building, inserting, and testing custom modules.

14.1 Compiling Kernel Modules

To compile the kernel module for the currently running kernel:

# Compile Kernel Module
make -C /lib/modules/$(uname -r)/build M=$(pwd) modules 
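
The make invocation above assumes a kbuild Makefile (and a module source file) in the current directory. Below is a minimal sketch of both; the module name mymodule matches the insmod/rmmod examples in this guide, and the source is a bare load/unload skeleton:

```shell
# Create a minimal module source and its kbuild Makefile (sketch).
cat > mymodule.c <<'EOF'
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

static int __init mymodule_init(void)
{
        pr_info("mymodule: loaded\n");
        return 0;
}

static void __exit mymodule_exit(void)
{
        pr_info("mymodule: unloaded\n");
}

module_init(mymodule_init);
module_exit(mymodule_exit);
EOF

# Note: make recipe lines below must be indented with a real tab.
cat > Makefile <<'EOF'
obj-m += mymodule.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
EOF
```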

14.2 Inserting and Removing Kernel Modules

To insert the compiled module into the running kernel:

# Insert Kernel Module
sudo insmod mymodule.ko 

To remove the kernel module:

# Remove Kernel Module
sudo rmmod mymodule 

Note: Use the module name without the .ko extension when removing.

15. Debugging Kernel Modules

For kernel module developers, debugfs and ftrace are invaluable for exposing internal kernel data and tracing function calls.

15.1 Mounting debugfs

To mount the debugfs filesystem:

# Mount debugfs
sudo mount -t debugfs none /sys/kernel/debug 

15.2 Using ftrace

To trace function calls or events in the kernel:

# Set the Current Tracer to 'function'
echo function | sudo tee /sys/kernel/debug/tracing/current_tracer 

To view the trace output:

# View Tracing Output
cat /sys/kernel/debug/tracing/trace 

This allows tracing of function calls, which is invaluable for kernel debugging.

16. Kernel Parameters and Tuning

Kernel parameters can be inspected and tweaked using the /proc/sys directory or via the sysctl command.

16.1 Viewing Kernel Parameters

To view all kernel parameters:

# View All Kernel Parameters
sysctl -a 

16.2 Modifying Kernel Parameters

To modify a kernel parameter (e.g., increasing the maximum number of open files):

# Increase Maximum Open Files
sudo sysctl -w fs.file-max=100000 

To make the change permanent, add the parameter to /etc/sysctl.conf or to a drop-in file under /etc/sysctl.d/, then reload all settings with sudo sysctl --system.

17. Building Custom Kernels and Modules

Building custom kernels or kernel modules is sometimes necessary when developing for the Linux kernel.

17.1 Compiling the Kernel

To configure and build a custom kernel:

# Configure Kernel Options
make menuconfig

# Compile the Kernel
make

# Install the Kernel and Modules
sudo make modules_install install 

17.2 Compiling and Inserting Custom Kernel Modules

For building and inserting custom kernel modules:

# Compile Kernel Module
make -C /lib/modules/$(uname -r)/build M=$(pwd) modules

# Insert Kernel Module
sudo insmod mymodule.ko

# Remove Kernel Module
sudo rmmod mymodule 

Linux Kernel Programming and Module Compilation on Raspberry Pi

Linux kernel programming on the Raspberry Pi presents unique challenges and considerations compared to traditional desktop or server environments. The Raspberry Pi, being a single-board computer based on the ARM architecture, requires specific approaches for kernel development, module compilation, and device driver integration. This guide explores the differences, methodologies, and best practices for effective kernel programming on the Raspberry Pi.


(A) Differences in Linux Kernel Programming on Raspberry Pi

A-1) Architectural Differences (ARM vs. x86)
A-2) Cross-Compilation
A-3) Kernel Versions and Distribution

(B) Module Compilation on Raspberry Pi

B-1) Setting Up the Development Environment
B-2) Cross-Compiling Kernel Modules
B-3) Loading and Unloading Modules

(C) Kernel Compilation on Raspberry Pi

C-1) Reasons for Compiling the Kernel
C-2) Obtaining the Source Code
C-3) Configuring the Kernel
C-4) Building and Installing the Kernel