When archiving large folders for storage across multiple DVDs, encryption can be added to enhance security. The following details the steps for archiving, splitting, and encrypting, with a focus on flexible and secure methods.
To efficiently manage large folders for single-layer DVD storage (approx. 4.5GB), the tar
and split
commands can be combined:
tar -zcvf - folder_name | split -b 4500M - archive_name.tar.gz.
The tar command creates a compressed .tar.gz archive and writes it to standard output. The split command reads that stream and splits the archive into 4.5GB chunks. The files are named sequentially, e.g., archive_name.tar.gz.aa, archive_name.tar.gz.ab, etc.
To restore the split archive, the following steps can be used to concatenate the parts and extract the archive:
cat archive_name.tar.gz.* > full_archive.tar.gz
tar -zxvf full_archive.tar.gz
Alternatively, both steps may be combined into one:
cat archive_name.tar.gz.* | tar -zxvf -
type archive_name.tar.gz.* | tar -zxvf - (Windows)
The split
command allows for adjustments based on DVD size:
FOLDER="./folder_name" && tar -zcvf - "$FOLDER" | split -b 4480M - "${FOLDER}.tar.gz."
FOLDER="./folder_name" && tar -zcvf - "$FOLDER" | split -b 8150M - "${FOLDER}.tar.gz."
In each case, the archive is split into chunks sized to fit onto DVDs (4480M per chunk for single-layer discs, 8150M for dual-layer discs), with the files being named sequentially.
Encryption can be added using external tools like GPG or OpenSSL since neither tar
nor split
directly supports password protection.
GPG provides AES-256 symmetric encryption for securing the tarball with a password.
tar -zcvf - folder_name | gpg --symmetric --cipher-algo AES256 -o archive_name.tar.gz.gpg
split -b 4500M archive_name.tar.gz.gpg archive_name_split.
cat archive_name_split.* > full_archive.tar.gz.gpg
gpg -o full_archive.tar.gz -d full_archive.tar.gz.gpg
tar -zxvf full_archive.tar.gz
This restores the archive by combining the parts, decrypting, and extracting the tarball.
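If there is not enough free space for the intermediate encrypted file, the compression, encryption, and splitting can also be chained in a single pipeline. This is a minimal sketch under that assumption; GPG prompts for the passphrase interactively:
# Compress, encrypt, and split in one pass (no intermediate full-size file)
tar -zcvf - folder_name | gpg --symmetric --cipher-algo AES256 | split -b 4500M - archive_name.tar.gz.gpg.
# Restore: concatenate, decrypt, and extract in one pass
cat archive_name.tar.gz.gpg.* | gpg -d | tar -zxvf -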
OpenSSL is another option for adding password-based encryption to the tar archive.
tar -zcvf - folder_name | openssl enc -aes-256-cbc -e -k 'password' -out archive_name.tar.gz.enc
split -b 4500M archive_name.tar.gz.enc archive_name_split.
cat archive_name_split.* > full_archive.tar.gz.enc
openssl enc -aes-256-cbc -d -k 'password' -in full_archive.tar.gz.enc -out full_archive.tar.gz
tar -zxvf full_archive.tar.gz
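On OpenSSL 1.1.1 and later, the enc step can additionally use explicit salting and PBKDF2 key derivation, which strengthens the password-based key; a sketch of the same workflow with those flags:
# Encrypt with explicit salt and PBKDF2 key derivation (OpenSSL 1.1.1+)
tar -zcvf - folder_name | openssl enc -aes-256-cbc -e -salt -pbkdf2 -k 'password' -out archive_name.tar.gz.enc
# Decrypt with the matching options
openssl enc -aes-256-cbc -d -salt -pbkdf2 -k 'password' -in full_archive.tar.gz.enc -out full_archive.tar.gz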
This process ensures that large folders can be efficiently archived, split, and encrypted for secure storage across multiple DVDs.
sips (Scriptable Image Processing System)
To convert a PNG file to a JPEG file, the following command can be used:
sips -s format jpeg input.png --out output.jpg
sips: The command-line utility for image processing on macOS.
-s format jpeg: Sets the output format to JPEG.
input.png: Refers to the input PNG file.
--out output.jpg: Specifies the name of the output file, which will be in JPEG format.
Note: The extensions .jpg and .jpeg are interchangeable. The sips command processes both formats the same way.
To convert a JPEG (or JPG) file to a PNG file, the following command is used:
sips -s format png input.jpg --out output.png
-s format png: Sets the output format to PNG.
input.jpg: Refers to the input JPEG or JPG file.
--out output.png: Specifies the name of the output file, which will be in PNG format.
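For batch conversion, sips can be driven from a small shell loop; the sketch below assumes the PNG files sit in the current directory:
# Convert every PNG in the current directory to JPEG
for f in *.png; do
    sips -s format jpeg "$f" --out "${f%.png}.jpg"
done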
During undergraduate studies in Computer Science, Emacs was recommended and has been used for over two decades. Familiarity with its shortcuts has facilitated work in C kernel programming and debugging. This document serves both as a guide for readers to grasp the benefits of Emacs and as a resource for personal learning, combining well-known features with newly explored aspects intended for future use.
Emacs is particularly well-suited for individuals who prefer a fully keyboard-driven workflow. This feature enables the execution of virtually any task—be it editing text, managing files, running commands, or browsing the web—without relying on a mouse. Such efficiency stands as one of the most compelling reasons users continue to utilize Emacs even after many years.
Ctrl+a moves to the beginning of a line.
Ctrl+e moves to the end of a line.
Ctrl+f and Ctrl+b move the cursor forward and backward by characters, respectively.
Buffers in Emacs are fundamental components that refer to any open file, running process, or even a help screen. They allow the management of multiple tasks or documents simultaneously without cluttering the workspace with numerous windows or applications.
Switching between buffers is done with Ctrl+x b. This command presents a list of buffers, enabling quick navigation. Alternatively, Ctrl+x Ctrl+b opens a more detailed buffer list, displaying all current buffers, including unsaved ones, shell outputs, or other processes.
Ctrl+x 3 (vertical split) and Ctrl+x 2 (horizontal split) allow viewing multiple buffers concurrently. This is particularly useful for comparing documents, keeping notes open while coding, or reading documentation alongside writing. Additionally, windows can be resized dynamically using Ctrl+x + or Ctrl+x - to adjust the layout according to current tasks.
Ctrl+x o cycles through the open windows, enabling seamless multitasking. Each window can display a different buffer, and Emacs retains the window configuration, facilitating easy return to specific setups.
Emacs provides robust tools for compiling and debugging code, which are essential for tasks such as kernel programming in C. These features streamline the development process by integrating compilation and debugging directly within the editor.
Functionality | Command | Description |
---|---|---|
Executing the Compile Command | ESC+x compile | Initiates the compilation process for the current project or file. Prompts for the compile command, which can be customized as needed (e.g., make for kernel programming). |
Navigating Compilation Errors | Ctrl+x ` (backtick) | Jumps to the next error in the compilation output. Emacs parses the compilation buffer and highlights errors, enabling quick navigation to problematic lines in the source code. |
Launching GDB | ESC+x gdb | Launches the GNU Debugger (GDB) within Emacs, providing an interface to set breakpoints, step through code, inspect variables, and evaluate expressions directly from the editor. |
Setting Breakpoints | Ctrl+x SPC | Sets a breakpoint at the current line in the source code. Breakpoints allow the debugger to pause execution at specific points, facilitating the inspection of program state. |
Stepping Through Code | n (next), s (step), c (continue) | Executes the next line of code, steps into functions for detailed inspection, and continues execution until the next breakpoint or end of the program. |
Inspecting Variables | ESC+x gdb-many-windows | Opens multiple debugging windows, including source code, assembly, registers, and variable lists, aiding in monitoring the state of variables and program flow during debugging sessions. |
Emacs' compilation and debugging capabilities make it a powerful tool for kernel programming in C, offering an all-encompassing environment that supports efficient and effective development practices.
EWW (Emacs Web Wowser) is a built-in web browser in Emacs that allows browsing the web within a text-based environment. Although minimal compared to graphical browsers, EWW provides an efficient means to navigate the web while fully leveraging the keyboard-driven workflow appreciated by many Emacs users.
Functionality | Command | Description |
---|---|---|
Opening a URL | ESC+x eww | Enter the URL or search term to visit a webpage. EWW will load the page within a buffer. |
Navigating Between Pages | l (Back), r (Forward), g (Reload) | l returns to the previous page, r moves forward in history, and g reloads the current page. |
Scrolling | 1' | Scroll through the page by screen or line increments. |
Following Links | Enter | Position the cursor over a link and press Enter to follow it. |
Opening Links in New Buffers | Meta + Enter | Opens the link in a new buffer, allowing multitasking across several web pages. |
Returning to the Home Page | h | Navigates back to the home page (if set) or the default Emacs home page. |
Bookmark a Page | b | Bookmarks the current page for quick access later without remembering the URL. |
View Bookmarks | B | Lists all bookmarks, allowing direct access to any saved page. |
Viewing Browsing History | H | Displays a list of previously visited pages, navigable with arrow keys or by entering corresponding numbers. |
Toggle Images | I | Toggles the display of images on or off. |
Source View | 2' | Opens the raw HTML source code of the current page in a new buffer. |
Change Search Engine | 3' | Customizes the default search engine used by EWW. |
1': Ctrl+v (Page Down), Meta+v (Page Up), Arrow Keys / Ctrl+n (Down) / Ctrl+p (Up)
2': ESC+x eww-view-source
3': Add (setq eww-search-prefix "https://www.google.com/search?q=") to the configuration
While EWW does not replace full-featured browsers like Firefox for multimedia-heavy browsing or complex web applications, it offers an efficient, minimalistic browsing experience for those who prefer staying within the Emacs ecosystem and rely on text-based content.
Dired (Directory Editor) mode in Emacs provides a powerful and interactive method for managing files. It facilitates browsing and manipulating files and directories within the editor, thereby streamlining file system operations.
Functionality | Command | Description |
---|---|---|
Launching Dired | ESC+x dired | Opens Dired mode, prompting for a directory path. The specified directory is then displayed for file and directory management within Emacs. |
File Operations | C (Copy), R (Rename), D (Delete) | Executes basic file operations such as copying, renaming, and deleting. Can be performed on single or multiple files for batch operations. |
Directory Navigation | Enter, ^ | Enter opens the directory or file under the cursor, while ^ moves up one directory level. |
Marking Files | m (Mark), u (Unmark) | Marks files for batch operations and unmarks them as needed, allowing multiple files to be acted upon simultaneously. |
Opening Files | Enter or f | Opens the file under the cursor in a new buffer. |
Sorting Files | s (Sort) | Sorts files by various criteria such as name, size, or modification date to enhance file management efficiency. |
Refreshing the Listing | g (Revert Buffer) | Refreshes the Dired buffer so that it reflects the current state of the directory, including changes made outside Emacs. |
Executing Shell Commands | ! (Shell Command) | Executes shell commands directly from within Dired on selected files, facilitating tasks like batch renaming or compression. |
Dired mode transforms Emacs into a comprehensive file management system, providing the necessary tools to handle complex file operations without leaving the editor environment.
Emacs Lisp (Elisp) is the programming language embedded within Emacs, allowing for extensive customization and extension of the editor's capabilities. Emacs Lisp enables the writing of scripts, defining new commands, and creating custom workflows tailored to individual needs.
Emacs Lisp can be used to remap existing key bindings or create new ones, enhancing the efficiency of the keyboard-driven workflow. For example, binding a frequently used command to a simpler key combination can streamline operations.
;; Example: Bind F5 to save all buffers
(global-set-key (kbd "<f5>") 'save-some-buffers)
Repetitive tasks can be automated using Emacs Lisp, reducing the need for manual intervention and minimizing the potential for errors. Automating file operations, text transformations, or buffer management are common applications.
;; Example: Automatically delete trailing whitespace on save
(add-hook 'before-save-hook 'delete-trailing-whitespace)
Users can define new interactive commands to perform specialized functions, enhancing the editor's functionality to suit specific workflows or projects.
;; Example: Define a command to insert the current date
(defun insert-current-date ()
  "Insert the current date at point."
  (interactive)
  (insert (format-time-string "%Y-%m-%d")))
(global-set-key (kbd "C-c d") 'insert-current-date)
Emacs Lisp allows for the creation of new major or minor modes, providing tailored environments for different programming languages, file types, or project requirements.
;; Example: Define a simple minor mode
(define-minor-mode my-custom-mode
  "A simple custom minor mode."
  :lighter " MyMode"
  :keymap (let ((map (make-sparse-keymap)))
            (define-key map (kbd "C-c m") 'insert-current-date)
            map))
(add-hook 'text-mode-hook 'my-custom-mode)
Emacs Lisp empowers users to transform Emacs into a highly personalized and powerful development environment. By leveraging Emacs Lisp, users can tailor Emacs to meet their unique requirements, enhancing productivity and fostering an efficient workflow.
Several command-line switches enhance Emacs' operation, similar to the -nw
(no-window) option. These switches provide flexibility in how Emacs is launched, catering to various user needs and preferences.
Switch Options | Description |
---|---|
-q | Starts Emacs without loading the initialization file (.emacs or init.el). Useful for troubleshooting configuration issues or starting Emacs with default settings. |
--no-splash | Launches Emacs without displaying the splash screen, resulting in a cleaner and faster startup experience. |
--daemon | Runs Emacs in the background as a daemon, allowing subsequent Emacs instances to open more quickly by connecting to the already running process. Particularly beneficial for users who frequently start and stop Emacs sessions. |
-batch | Executes Emacs in batch mode, without opening the graphical or text interface. Typically used for script execution or automation tasks, enabling Emacs to process files and perform operations without user interaction. |
--debug-init | Starts Emacs with debugging enabled for the initialization process, aiding in the identification and resolution of errors within startup configuration files. |
These switches provide users with the ability to customize the Emacs startup behavior, enhancing the overall user experience by aligning Emacs' operation with specific requirements and use cases.
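As a usage sketch, the --daemon switch pairs naturally with emacsclient, which connects to the running daemon instead of starting a new instance; the exact invocation may vary by setup:
# Start a background Emacs daemon once
emacs --daemon
# Open a file instantly in the terminal by connecting to the daemon
emacsclient -nw ~/.zshrc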
Ctrl+k: Deletes from the cursor to the end of the line and stores the deleted content in the kill ring (Emacs' clipboard equivalent).
ESC+x compile: Executes the compile command, enabling code compilation within Emacs, which is particularly useful for developers.
ESC+x query-replace: Initiates an interactive find-and-replace operation, prompting for confirmation before each replacement.
ESC+x replace-string: Performs a non-interactive find-and-replace, replacing all occurrences of the specified string.
ESC+x shell: Opens a shell within Emacs, providing access to a command-line interface directly from the editor.
Ctrl+space, ESC+w: Marks a region for copying and then copies the selected text into the kill ring.
Ctrl+y: Pastes (or "yanks") the most recently copied or cut text from the kill ring.
Ctrl+y followed by ESC+y: Cycles through the kill ring, enabling the pasting of previously copied or cut items.
Ctrl+x u: Undoes the most recent changes. This command can be repeated to undo multiple actions.
Emacs offers several native commands for interactive or automatic string substitution. The macOS convention is used throughout (⌥ = Meta (M), ⌘ = Super (s)).
query-replace — step-by-step confirmation for each match.
query-replace-regexp — regular-expression variant with identical prompts.
Command | Scope & confirmation | Pattern type | Typical keystroke |
---|---|---|---|
query-replace | Interactive, buffer or region | Literal | M % |
query-replace-regexp | Interactive, buffer or region | Emacs Lisp regexp | M ⇧ % |
replace-string | Automatic, buffer or region | Literal | M-x replace-string |
The following examples illustrate practical refactoring patterns and the reasoning behind each step.
M ⇧ % ^\(defun\s-+\)old_\(.*\)$ RET \1new_\2 RET !
What happens:
^\(defun\s-+\) captures the function keyword plus its required space into Group 1.
old_\(.*\)$ captures the remainder of the symbol (e.g. old_process) into Group 2.
\1new_\2 rebuilds each definition as (defun new_process …), preserving the original suffix.
This technique is ideal for systematic API renaming after a naming-policy change.
Regional replacement is particularly useful when refactoring temporary variables inside a long file while leaving other sections untouched.
C r M %
The prefix C r calls query-replace
in reverse, scanning from point toward the beginning of the buffer (BOB).
Reverse traversal prevents accidental double replacements when iterating through matches already passed during forward edits.
M ⇧ % ,\s-*\\n RET ,\n\t RET !
Purpose: re‑formatting comma‑separated JSON arrays so that each element begins on a new, indented line.
\s-* matches any horizontal whitespace.
The pattern ends with \\n, ensuring the match includes the line break itself.
The replacement inserts a newline (\n) followed by a tab (\t) before the next array element.
The command may be combined with narrowing (C-x n n) to focus on a JSON block without disturbing surrounding code.
M-x occur followed by C-c C-o turns the *Occur* buffer writable; committed changes propagate back.
In Dired, Q invokes dired-do-query-replace-regexp across marked files.
next-error or grep can be used to step through the affected locations.
Narrow to a region (C-x n n) or operate within occur/grep buffers to avoid unintended files.
Replacement escapes (\n \t \1) follow Emacs Lisp conventions, not POSIX syntax.
Written on May 11, 2025
In 2019, Apple officially adopted Zsh (Z Shell) as the default shell, starting with macOS Catalina (10.15). This transition marked a significant change from the previously utilized Bash, which had been the default since the inception of macOS. The switch was largely driven by licensing issues and the enhanced features offered by Zsh, making it a more appealing choice for modern developers and power users.
Apple's decision to shift from Bash to Zsh was influenced substantially by licensing concerns. Until version 3.2, Bash was licensed under the GNU General Public License v2 (GPLv2), which posed fewer restrictions on redistribution and modification. Apple continued using this version for many years.
However, with the release of Bash 4.0, the license changed to GPLv3, which introduced stricter conditions, including broader copyleft obligations around distributing modified versions and anti-tivoization requirements for locked-down hardware.
By transitioning to Zsh, which is licensed under an MIT-like license, Apple was able to circumvent these issues. This permissive license allowed Apple to include Zsh without the obligation to disclose proprietary modifications, aligning more effectively with Apple’s distribution model.
Apart from addressing licensing concerns, Zsh provided various technical advantages that improved the user experience and rendered it a more suitable choice for Apple’s ecosystem.
1. Permissive Licensing
The MIT-like license associated with Zsh afforded Apple greater flexibility. Unlike GPLv3, it does not impose the requirement to share modifications, permitting Apple to distribute Zsh freely without concerns over proprietary rights.
2. Enhanced Features for Power Users
Zsh offers a range of features that enhance productivity and streamline shell interactions and are particularly beneficial for developers, such as smarter tab completion, spelling correction and approximate matching, shared command history across sessions, and more powerful filename globbing.
3. User-Configurable Options and Prompt Customization
Zsh supports a broad spectrum of configuration options, enabling users to personalize nearly every aspect of the shell. This includes the capability to create dynamic prompts that display real-time information, contributing to a more informative and engaging terminal experience.
Zsh’s popularity among developers and system administrators has fostered a vibrant community that actively provides resources such as configuration frameworks (most notably Oh My Zsh), plugins, and themes.
In adopting Zsh as the default shell, Apple aligned with the preferences of a considerable portion of its developer user base. Many developers had already embraced Zsh for its advanced features, and the switch made macOS more intuitive and appealing to this audience.
Shifting to Zsh also facilitated a departure from the aging Bash 3.2, bringing several advantages in terms of security and maintainability.
Using scp in Zsh
1. Local vs. Remote Expansion
When utilizing wildcards with scp
, it is important to recognize that Zsh may attempt to expand these wildcards locally before executing the command. For example, a command intended to copy all .txt
files from a remote server might resemble:
scp user@remote:/path/to/files/*.txt /local/destination/
Zsh might expand *.txt
based on the local file system, potentially leading to unintended behavior. This happens because Zsh’s default behavior involves expanding wildcards during the globbing phase, which occurs before the command is executed. If matching files exist in the specified local path, Zsh replaces the wildcard with these files.
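As an illustration of this behavior, when no local file matches the pattern, zsh refuses to run the command at all; the output below is typical, though the exact wording may vary by version:
# With no local *.txt present, zsh aborts before scp even runs
scp user@remote:/path/to/files/*.txt /local/destination/
# zsh: no matches found: user@remote:/path/to/files/*.txt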
2. Why Escaping Wildcards Works
To ensure the wildcard is interpreted on the remote server rather than locally, escaping the wildcard with \*
is necessary:
scp user@remote:/path/to/files/\*.txt /local/destination/
Escaping the asterisk directs Zsh to pass the wildcard to scp
without local expansion, allowing the remote shell to interpret *.txt
and carry out the intended file selection.
Several techniques can prevent Zsh from performing local expansion on wildcards meant for remote servers:
Quoting the wildcard: Enclosing the remote path in single quotes prevents Zsh from expanding it locally.
scp 'user@remote:/path/to/files/*.txt' /local/destination/
noglob: Zsh’s noglob directive disables wildcard expansion for the specified command.
noglob scp user@remote:/path/to/files/*.txt /local/destination/
rsync for Complex Transfers: For advanced file transfers, especially those involving recursion and selective inclusion/exclusion, rsync offers better control over wildcard patterns.
rsync -av --include='*.txt' --exclude='*' user@remote:/path/to/files/ /local/destination/
For verification and troubleshooting, the manner in which Zsh interprets an scp command can be checked by prepending it with echo:
echo scp user@remote:/path/to/files/\*.txt /local/destination/
Alternatively, using the -v
option with scp
yields verbose output, aiding in the diagnosis of file transfer issues:
scp -v 'user@remote:/path/to/files/*.txt' /local/destination/
Use \*, quotes, or noglob to ensure wildcards are processed on the remote server.
The setopt and unsetopt commands in Zsh allow for adjustments to wildcard handling. Reviewing these settings can assist in tailoring Zsh’s behavior to specific needs.
Configuring environment variables and adding aliases or functions for frequently used commands can greatly enhance efficiency in the command-line environment. This guide provides detailed instructions on how to set environment variables temporarily and permanently, both for individual users and system-wide, as well as how to add aliases and functions in zsh or bash shells, applicable to both Linux and macOS systems.
To set an environment variable for the current terminal session, use the export
command. This change will only persist for the duration of the session and will be cleared once the terminal is closed.
Example: To temporarily set the PYTHONPATH
environment variable:
# Temporarily set PYTHONPATH
export PYTHONPATH="/path/to/python/libs"
This sets the PYTHONPATH
variable to include the specified directory for the current session.
To make environment variables persist across sessions, they must be added to the shell's configuration file. For zsh users, this is typically ~/.zshrc
; for bash users, it is ~/.bashrc
.
# Open .zshrc with a text editor
emacs ~/.zshrc
# Open .bashrc with a text editor
emacs ~/.bashrc
For example, to set the PYTHONPATH
environment variable permanently:
# Set PYTHONPATH permanently
export PYTHONPATH="/path/to/python/libs"
If using pyenv
, it may be necessary to add:
# Set up pyenv
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
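Depending on the pyenv version in use, shim initialization may also be needed for the shell to pick up pyenv-managed interpreters; a commonly used addition (verify against the installed pyenv's documentation):
# Initialize pyenv shims and shell integration
eval "$(pyenv init -)"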
Save the file and exit the text editor.
# Apply changes
source ~/.zshrc
# Apply changes
source ~/.bashrc
Note: On macOS, the default shell is zsh (since macOS Catalina). The same steps apply for setting environment variables in zsh on macOS.
For environment variables that should be available to all users on the system, add them to system-wide configuration files. On Linux, these files are /etc/environment
, /etc/profile
, or /etc/bash.bashrc
. On macOS, system-wide configurations for zsh can be added to /etc/zshenv
or /etc/zshrc
.
/etc/environment:
sudo emacs /etc/environment
/etc/profile:
sudo emacs /etc/profile
/etc/zshrc (macOS):
sudo emacs /etc/zshrc
For example, to set PYTHONPATH
globally:
# Set PYTHONPATH globally
export PYTHONPATH="/usr/local/lib/python3.9/site-packages"
Save the file and exit the text editor.
To apply the changes, log out and log back in, or source the configuration file. Note that changes to some system-wide files may require a system reboot or re-login to take effect.
Aliases and functions allow for efficient command reuse and can be added to the shell's configuration files.
Aliases are shortcuts for commands. To add aliases:
# Open .zshrc. If absent, use .zprofile
emacs ~/.zshrc
# Open .bashrc
emacs ~/.bashrc
# Alias to compress and split a folder
alias compress_folder='FOLDER="folder_name" && tar -zcvf - "$FOLDER" | split -b 4480M - "${FOLDER}.tar.gz."'
# Alias to search a specific folder for a pattern
alias grep_designated_folder='find /path/to/designated/folder -type f -print0 | xargs -0 grep -i "###" 1> tmp1 2> tmp2'
compress_folder: Compresses and splits a folder into chunks.
grep_designated_folder: Searches a specific folder for a pattern.
# Apply alias changes for zsh
source ~/.zshrc
# Apply alias changes for zsh when using .zprofile
source ~/.zprofile
# Apply alias changes for bash
source ~/.bashrc
Functions provide more flexibility with parameters than aliases. To add functions:
Add Function Definitions
# Function to compress a folder with a given name
compress_folder() {
FOLDER="$1"
tar -zcvf - "$FOLDER" | split -b 4480M - "${FOLDER}.tar.gz."
}
# Function to search a specified folder for a given pattern
grep_designated_folder() {
find "$1" -type f -print0 | xargs -0 grep -i "$2" 1> tmp1 2> tmp2
}
compress_folder: Accepts a folder name as an argument and compresses it.
grep_designated_folder: Searches a specified folder for a given pattern.
alias gonginx="cd /opt/homebrew/etc/nginx/"
alias gohttp="cd /opt/homebrew/var/www/"
cd /opt/homebrew/var/www
function tar_backup_prototype() {
cd /opt/homebrew/var || return
filename="WEB$(date +"%Y%m%d")"
tar -zcvf "${filename}.tar.gz" www/
cd /opt/homebrew/var/www || return
}
function tar_backup() {
cd /opt/homebrew/var || return
# Initial base filename
base_filename="WEB$(date +"%Y%m%d")"
filename="${base_filename}.tar.gz"
# Check if the filename already exists, and append a counter if necessary
counter=1
while [ -e "$filename" ]; do
filename="${base_filename}_${counter}.tar.gz"
counter=$((counter + 1))
done
# Create the tar archive with the unique filename
tar -zcvf "$filename" www/
# Return to the specified directory
cd /opt/homebrew/var/www || return
}
function tar_web() {
# Check if a filename argument is provided
if [ -z "$1" ]; then
echo "Usage: tar_web "
return 1 # Exit the function with a non-zero status
fi
# Navigate to the specified directory or exit if it fails
cd /opt/homebrew/var || return
# Create the tar.gz archive with the provided filename
tar -zcvf "${1}.tar.gz" www/
# Navigate back to the www directory or exit if it fails
cd /opt/homebrew/var/www || return
}
function scp_backup_today() {
scp "ngene.org:/opt/homebrew/var/WEB$(date +"%Y%m%d")*.tar.gz" ~/Desktop/
}
scp2web() {
local filename="$1"
scp "${filename}"* ngene.org:/opt/homebrew/var/www/
}
eval "$(/opt/homebrew/bin/brew shellenv)"
export PATH="/opt/homebrew/sbin:$PATH"
alias zprofile_change='emacs ~/.zprofile'
alias zprofile_apply='source ~/.zprofile'
# Function to search for a specific term within files under a specified directory, case-insensitive.
function file_grep() {
# Check if both search term and search path are provided
if [[ -z "$1" || -z "$2" ]]; then
echo "Usage: file_grep "
echo "Example: file_grep \"nginx\" /opt/homebrew"
return 1
fi
# Assign arguments to variables for clarity
search_term="$1"
search_path="$2"
# Execute the search command
sudo find "$search_path" -type f -print0 | xargs -0 grep -i "$search_term"
}
# Function to search files by regex in a specified directory (case-insensitive)
function find_re() {
# Display usage instructions if arguments are missing
if [[ -z "$1" || -z "$2" ]]; then
echo "Usage: find_re "
echo "Example: find_re /opt/homebrew '.*frank.*'"
return 1
fi
# Assign arguments to variables for readability
search_path="$1"
regex_pattern="$2"
# Execute the find command with case-insensitive regex
find "$search_path" -type f -iregex "$regex_pattern"
}
# Function to search for files with a case-insensitive substring match in the filename
function find_str() {
# Display usage instructions if arguments are missing
if [[ -z "$1" || -z "$2" ]]; then
echo "Usage: find_str "
echo "Example: find_str /opt/homebrew '###'"
return 1
fi
# Assign arguments to variables for clarity
search_path="$1"
search_text="$2"
# Execute the find command with case-insensitive name matching
find "$search_path" -type f -iname "*$search_text*"
}
##################################
alias emacs="emacs -nw"
Once the environment variables, aliases, or functions are set in the shell's configuration file, they become available in every new terminal session.
Using the compress_folder
Function: To compress and split a folder named folder_name
, run:
# Compress and split a folder
compress_folder folder_name
Automatic Environment Variables: The PYTHONPATH
variable will be automatically set upon opening a new terminal, allowing Python to locate additional libraries specified in the path.
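To confirm that the setting is actually reaching Python, the interpreter's import search path can be printed from a new terminal; this assumes a python3 binary is on the PATH:
# The directory added to PYTHONPATH should appear in the printed list
python3 -c 'import sys; print(sys.path)'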
Recursive Listing (ls -R)
The ls -R
command lists all files and directories recursively. This means it will display the contents of the current directory and all subdirectories, which is useful for viewing a complete directory structure.
Human-Readable File Sizes (ls -lh)
The ls -lh
command displays file sizes in a human-readable format, showing sizes in kilobytes (KB), megabytes (MB), or gigabytes (GB), as appropriate. This makes it easier to understand file sizes at a glance compared to the default byte-based format.
Detailed Listing of All Files Sorted by Time (ls -lart)
The ls -lart
command combines several options to provide an advanced view of files:
-l: Displays files in a long format, including permissions, ownership, size, and modification date.
-a: Includes hidden files (those starting with a dot) in the listing.
-r: Reverses the order, showing the oldest files first.
-t: Sorts files by modification time, placing the most recently modified files at the end of the list when used with -r.
Listing Files by Size (ls -lSr)
The ls -lSr
command lists files by size in ascending order:
-l: Shows detailed file information, such as file size, permissions, and ownership.
-S: Sorts files by size, starting with the largest.
-r: Reverses the order, showing the smallest files first.
Colorized Output (ls --color=auto)
The ls --color=auto
command adds color to the output, distinguishing files, directories, and symbolic links by color. This visual enhancement simplifies identification of different types of files within the terminal. (On macOS, the BSD ls uses -G instead of --color=auto.)
rm
When a file name contains spaces, quotes are necessary to ensure the shell interprets it correctly. For example:
rm "file with spaces.txt"
In this case, either single or double quotes can be used to handle the spaces in the file name properly.
An alternative method for handling file names with spaces is to escape each space with a backslash (\
):
rm file\ with\ spaces.txt
This approach is particularly effective when dealing with multiple files or file names containing special characters directly in the terminal.
Protecting the Root Directory (--preserve-root)
The rm -rf --preserve-root
command adds an extra safeguard, ensuring the root directory (/
) is never deleted. This is crucial to prevent accidental system-wide deletion.
Deleting All Files Except a Pattern (rm !(*.txt))
The rm !(*.txt)
command deletes all files except those matching a specific pattern, such as text files. This requires enabling extglob
with the command:
shopt -s extglob
This method provides control over batch deletion while protecting specific file types.
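Since zsh (the macOS default discussed earlier) does not use extglob, the equivalent there relies on zsh's extended globbing; a minimal sketch:
# zsh equivalent: enable extended globbing, then delete everything except .txt files
setopt extendedglob
rm -- ^*.txt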
cp
– Enhanced Copying
The cp
command offers several enhancements:
cp -u source destination
: Only copies files if the source is newer than the destination, making it useful for backup scripts.cp --parents file /path/to/destination
: Copies the file along with its directory structure, preserving the hierarchy.
For more efficient copying, consider using rsync
:
rsync -avh source destination
rsync
is optimized for large transfers, preserving permissions and utilizing compression.
mv
– Moving with Precision
The mv
command can be used with additional options:
mv -u source destination: Moves files only if the source is newer or if the destination file does not exist.
find . -name "*.bak" -exec mv {} /backup/ \;: Moves all .bak files from the current directory to /backup/, combining find with mv for complex file operations.
find – Complex Search
find /path -mtime -1: Finds files modified in the last day.
find /path -type f -exec chmod 644 {} \;: Finds files and changes their permissions in bulk.
find /path -name '*.log' -size +10M -delete: Deletes log files larger than 10MB.
awk – Advanced Text Processing
awk '{print $1, $3}' file: Extracts and prints specific columns from a file (e.g., column 1 and column 3).
awk '/pattern/ {print $0}' file: Prints lines matching a pattern.
Combined with process monitoring, the following command lists users and processes consuming more than 50% CPU:
ps aux | awk '$3 > 50 {print $1, $3, $11}'
sed – Streamlined Text Editing
sed -i 's/old/new/g' file: Replaces all occurrences of old with new within a file.
sed '/pattern/d' file: Deletes lines matching a pattern.
df and du – Disk Usage Analysis
df -hT: Displays disk usage in a human-readable format with file system type.
du -sh * | sort -rh: Sorts files and directories by size, providing a clear overview of storage usage.
du --max-depth=1 /path: Shows disk usage for directories up to a specific depth, helping to identify space hogs quickly.
ps and top – Process Monitoring
ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem: Lists processes sorted by memory usage, helping to monitor resource-intensive tasks.
top -o %MEM: Sorts processes by memory usage, prioritizing resource consumption monitoring.
netstat and ss – Network Monitoring
netstat -tuln: Lists all open ports and their associated services.
ss -tuln: A faster alternative to netstat, providing similar insights into network activity.
xargs – Efficient Command Chaining
find /path -name '*.log' | xargs rm: Finds all log files and passes them to rm for deletion.
cat file.txt | xargs -n 1 echo: Feeds each line of a file into echo one by one, enabling efficient multi-line processing.
alias – Command Shortcuts
Setting an alias in ~/.bashrc
or ~/.zshrc
helps reduce repetitive typing:
alias proj="cd /home/user/projects"
alias ll="ls -lh"
alias cls="clear"
rsync – Smart Synchronization
rsync -avz --progress source/ destination/: Synchronizes files between two directories with compression and displays progress.
rsync -avz --delete source/ destination/: Deletes files from the destination that do not exist in the source, ideal for mirroring directories.
These advanced commands and techniques offer powerful control over file management, system monitoring, and data transfers in Linux.
(A) grep -r "###" /path/to/designated/folder
This command searches recursively for the string "###" in all files and subdirectories under the specified directory.
-r: The recursive flag, which enables searching through all files and subdirectories within the provided directory.
Limitations: The search is case-sensitive unless the -i flag is added, and every matching line is printed, which can make the output lengthy.
(B) grep -rl "###" /path/to/designated/folder
This command functions similarly to the first, with the addition of the -l
flag, which modifies the output to display only filenames containing the matching text, without showing the matching lines.
-r: Recursively searches through all directories and files.
-l: Prints only the filenames where matches are found, excluding the matching lines from the output.
Advantages: The output is concise, listing only the files that contain a match.
Limitations: The search is case-sensitive unless the -i flag is added.
(C) find /path/to/designated/folder -type f | xargs grep -i "###" 1> tmp1 2> tmp2
This command begins by using find
to list all files, then pipes the results to grep
via xargs
, which searches for the specified string "###".
-type f
: Restricts the search to regular files only, excluding directories or other file types.|
: The pipe operator, which takes the output of find
and passes it as input to the next command (xargs
).xargs
: Takes the output from find
and uses it as arguments for another command, in this case grep
.1> tmp1
: Redirects the standard output (such as matching lines and filenames) to a file named tmp1
.2> tmp2
: Redirects error messages (such as permission denied errors) to a file named tmp2
.Advantages:
-i
flag.Limitations:
-print0
and xargs -0
.(D) find /path/to/designated/folder -type f -print0 | xargs -0 grep -i "###" 1> tmp1 2> tmp2
This command enhances the previous one by handling filenames that contain spaces or special characters more effectively.
-print0
: Instructs find
to output filenames terminated by a null character (\0
), rather than a newline. This is beneficial for managing filenames that include spaces, special characters, or newlines.|
: The pipe operator, used to pass the output from find
to xargs
.xargs -0
: Instructs xargs
to expect null-terminated input, ensuring compatibility with find
's -print0
option and properly handling filenames with special characters.Advantages:
-i
flag.Limitations:
-print0
and xargs -0
.find "$(brew --prefix)" -name [tool_name] -type f
This document explains how to determine the installation path of a command-line tool on macOS. The instructions focus on scenarios involving Homebrew installations, though they can be adapted to any other method of tool installation. Every step and detail is preserved from earlier discussions but rearranged and generalized to maintain a high level of clarity and professionalism.
find "$(brew --prefix)" -name [tool_name] -type f
Command
This command is a convenient way to locate a specific file—such as a binary for a tool—within the directory structure managed by Homebrew. The example below uses [tool_name]
as a placeholder; substitute the actual tool’s name (e.g., ffmpeg
, git
, or any other executable).
find "$(brew --prefix)" -name [tool_name] -type f
brew --prefix
: Returns the base directory where Homebrew is installed. On Apple Silicon (M1/M2) systems, this is commonly /opt/homebrew
; on Intel-based Macs, /usr/local
; some custom Homebrew setups may differ.
$(...)
: When the shell processes brew --prefix
inside $(...)
, it replaces that portion of the command with the actual path (e.g., /opt/homebrew
), resulting in:
find "/opt/homebrew" -name [tool_name] -type f
find "$(brew --prefix)"
: Instructs the find
utility to begin searching in the Homebrew prefix directory returned by brew --prefix
.
-name [tool_name]
: Tells find
to look for files named exactly [tool_name]
.
-type f
: Restricts the search to regular files, excluding directories, symlinks, or other file types.
Outcome: The command scans Homebrew’s installation tree for files named [tool_name]
and helps pinpoint the exact location of the installed binary.
Some tools may not have been installed using Homebrew, or they may be installed in a location not covered by the Homebrew directory structure. In such cases, the following approaches can be used:
which
or command -v
which [tool_name]
or
command -v [tool_name]
If [tool_name]
is found in the PATH, these commands return the absolute path (for example, /usr/local/bin/[tool_name]
). If nothing is returned, the tool is not on the system’s PATH.
If the exact location remains unknown, it may be necessary to search the entire file system:
sudo find / -name [tool_name] -type f 2>/dev/null
/: Starts the search from the root directory.
2>/dev/null: Redirects error messages (such as permission denials) to /dev/null, creating a cleaner output.
This approach can take significantly longer than searching only the Homebrew prefix because it scans every accessible directory on the system.
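When the tool was installed by Homebrew, the package manager itself can also report locations directly; a quick sketch, with [tool_name] again as a placeholder formula name:
# List the files Homebrew installed for the formula
brew list [tool_name]
# Print the formula's installation prefix (e.g., /opt/homebrew/opt/[tool_name])
brew --prefix [tool_name]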
find "$(brew --prefix)" -name [tool_name] -type f
brew --prefix
ensures the command dynamically targets the correct location without manual path specification.
find
Utilityfind
command is included by default on macOS (as well as most Linux and Unix-like systems). There is no need to install additional software to run find
.
find
by directing it to a suspected location or by conducting a system-wide search.
ffmpeg
), the same syntax applies to any file name. Substitute [tool_name]
to locate other binaries or resources.
Written on February 12, 2025
Managing processes is a fundamental aspect of system administration in both Linux and macOS environments. Understanding how to check, search for, and control processes is essential for maintaining system performance and stability. This guide provides detailed instructions on managing processes, incorporating tools and commands available in both Linux and macOS.
The ps Command
The ps (process status) command provides a snapshot of current processes.
ps aux
a: Displays processes from all users.
u: Shows processes with a user-oriented format.
x: Includes processes without a controlling terminal.
ps -ef
-e: Selects all processes.
-f: Displays a full-format listing.
The top Command
The top command provides a dynamic, real-time view of running processes.
To run top:
top
M to sort by memory usage.
P to sort by CPU usage.
q to quit.
Note: On macOS, top has some differences in options and display.
macOS top Command Differences:
o to change sort order. For example, to sort by memory:
o mem
/ to search for a process.
Set the update interval with the -s option:
top -s 5
top -n 20
top -u username
The htop Command
htop is an interactive process viewer with a user-friendly interface.
Installing htop:
sudo apt-get install htop # For Debian-based systems
brew install htop # For macOS
Running htop:
htop
The pstree Command
Displays processes in a tree format, showing parent-child relationships.
pstree
pgrep
Searches for processes based on name and other attributes.
pgrep process_name
-l: Lists the process name alongside the PID.
-u user_name: Searches for processes owned by a specific user.
ps with grep
Filters the list of processes to find specific ones.
ps aux | grep process_name
Excluding the grep Process Itself:
ps aux | grep [p]rocess_name
Searching in top and htop
top: Press / and type the process name to search.
htop: Press F3 and enter the process name.
Processes consuming excessive CPU or memory can degrade system performance.
Identify them by opening top or htop and sorting by CPU usage.
They can then be terminated directly from top or htop.
Processes that are not functioning correctly or have become defunct.
In ps output, the STAT column indicates the state.
D: Uninterruptible sleep (usually I/O).
Z: Zombie (terminated but not reaped by parent).
T: Stopped.
R: Running.
S: Sleeping.
The lsof Command
Lists open files and the processes that opened them.
lsof /path/to/file
lsof -i :port_number
sudo lsof -i :80
sudo lsof -i :22
lsof -p PID
The netstat Command
Displays network-related information.
netstat -tulpn
netstat -anv | grep LISTEN
-a: Display all sockets.
-n: Show numerical addresses without resolving hostnames.
-v: Verbose output.
The kill Command
Sends a signal to a process to terminate it.
kill PID
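A specific signal can be selected by number or by name; in the sketch below, 1234 is a placeholder PID, and the common signals are summarized right after:
# Ask the process to exit gracefully (SIGTERM is also the default)
kill -15 1234
# Force termination only if the process ignores SIGTERM
kill -9 1234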
SIGTERM (15): Requests a graceful shutdown.
SIGKILL (9): Forces immediate termination.
SIGSTOP (19): Stops (pauses) a process.
SIGCONT (18): Continues a stopped process.
The killall Command
Terminates processes by name rather than PID.
killall process_name
-u user_name: Kills processes owned by a specific user.
-signal: Sends a specific signal.
Note: On macOS, killall
targets processes by their full command name as displayed in ps
or top
. The command is case-sensitive and requires the exact process name.
The pkill Command
Similar to pgrep, but sends signals to processes.
pkill process_name
-u user_name: Targets processes of a specific user.
-signal: Specifies the signal to send.
htop: Press F9 or k to initiate the kill menu and choose the signal to send (the default is SIGTERM).
vmstat
Reports virtual memory statistics.
vmstat 2 5
Note: On macOS, use vm_stat
(with an underscore).
vm_stat
iostat
Reports CPU and input/output statistics.
iostat 2 5
sar (Linux Only)
Collects, reports, or saves system activity information.
sar -u 2 5
Ensuring the correct process is targeted helps prevent unintended system behavior.
SIGTERM: Allows the process to close files and release resources gracefully.
Use SIGKILL Only if Necessary: Forcefully terminates the process without cleanup.
Use pstree to understand relationships.
System logs are located in /var/log/. Use tail for real-time monitoring:
tail -f /var/log/syslog
On macOS, the unified log can be reviewed with:
log show --last 1h
Some processes require root privileges to manage. Operate with the least privilege necessary.
Use sudo when needed:
sudo kill PID
By understanding and utilizing these tools and commands, processes can be effectively managed in both Linux and macOS environments, ensuring optimal system performance and stability.
In Unix-like environments and macOS, external drives such as SD cards or USB disks are typically mounted in specific directories, making them accessible from the command line. These drives may be automatically mounted in designated directories, or manual mounting can be employed for greater control.
On macOS, external drives are automatically mounted in the /Volumes
directory. Each drive appears as a folder within this directory, named according to the drive’s label, allowing for organized and predictable access.
cd /Volumes
ls
After navigating to /Volumes
, using the ls
command lists the mounted drives. For instance, if an SD card is labeled "SDCARD," access it directly by specifying the drive’s path:
cd /Volumes/SDCARD
In most Linux distributions, external drives are generally mounted in either /media
or /mnt
, with specific mounting practices depending on distribution and user configuration:
Drives are usually mounted automatically in /media/username/DRIVENAME
, where username
represents the logged-in user.
cd /media/username/DRIVENAME
For manual mounting, /mnt
is commonly used as a directory for temporary mounts. This process requires the use of the mount
command.
sudo mount /dev/sdX1 /mnt
cd /mnt
Replace /dev/sdX1
with the correct device name for the external drive. For example, /dev/sdb1
often denotes the first partition on a USB disk.
The mount Command for Manual Mounting
The mount command provides flexibility for mounting devices, allowing access to a variety of filesystems and external storage. The command is structured as follows:
sudo mount -o options device mount_point
/dev/sdb1
./mnt
or a subdirectory within /media
.ro
for read-only access or loop
for mounting ISO files.1. Mounting a USB Drive Manually: To mount a USB drive (e.g., /dev/sdb1
) to /mnt/usb
, use:
sudo mount /dev/sdb1 /mnt/usb
Before accessing the device, ensure that the /mnt/usb
directory exists, creating it if necessary:
sudo mkdir -p /mnt/usb
2. Mounting an ISO File as a Loop Device: ISO files are often mounted as loop devices, making their files accessible without burning them to physical media. This example mounts an ISO file as a read-only loop device using the iso9660
filesystem type:
sudo mount -o loop,ro -t iso9660 /path/to/file.iso /mnt/iso
3. Unmounting a Device: To safely remove a mounted device, unmount it using the umount
command:
sudo umount /mnt/usb
Ensuring the device is unmounted before physically disconnecting it helps prevent data loss or corruption.
These practices allow for flexible and efficient management of external drives and ISO files across Unix-like environments, providing consistent access through automatic and manual mounting techniques.
emacs ~/.zsh_history
The following instructions describe the process for accessing the complete shell command history using Emacs. The procedure is outlined in a systematic manner, providing details for both bash and zsh shells.
A shell history file must be identified before proceeding:
bash: ~/.bash_history
zsh: ~/.zsh_history
Before opening the history file, it is advisable to ensure that the session’s command history is fully written to the history file.
Shell | Command | Description |
---|---|---|
bash | history -a | Appends the session's recent commands to the history file. |
zsh | fc -W | Writes the current session's history to the history file. |
Execute the corresponding command in the terminal to update the history file.
Once the history file is updated, Emacs can be used to view and search the command history. Launch Emacs with the appropriate file as follows:
emacs ~/.bash_history
emacs ~/.zsh_history
Opening the file in Emacs allows navigation, search, and editing of the complete command history.
Written on April 1, 2025
This guide provides detailed instructions for downloading Debian ISO files using Jigdo on a macOS system. The steps are organized to ensure clarity and efficiency, addressing potential challenges that may arise during the process.
Homebrew serves as a package manager for macOS, facilitating the installation of various software packages, including Jigdo.
brew install jigdo
Execute this command in the Terminal to install Jigdo.
brew list jigdo
Confirm that Jigdo has been installed correctly. Typical output includes executable files located in /opt/homebrew/Cellar/jigdo/0.8.2/bin/
, such as:
jigdo-file
jigdo-lite
jigdo-mirror
To utilize Jigdo commands seamlessly, ensure that Homebrew’s binary directory is included in the system’s PATH.
echo $PATH
Verify if /opt/homebrew/bin
is part of the PATH. If not present, proceed to update the PATH.
Add Homebrew’s binary directory to the PATH by editing the shell configuration file (~/.zshrc
for Zsh or ~/.bashrc
for Bash).
For Zsh users:
echo 'export PATH="/opt/homebrew/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc
For Bash users:
echo 'export PATH="/opt/homebrew/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
which jigdo-lite
A valid path, such as /opt/homebrew/bin/jigdo-lite
, indicates successful configuration.
If jigdo-lite
is still not found, manual linking may be required. Execute the following commands:
brew unlink jigdo && brew link jigdo
which jigdo-lite
To reconstruct Debian ISO files, both .jigdo
and .template
files are required. These files provide the necessary information and structure for the ISO assembly.
Navigate to the Debian Jigdo DVD Images. Download the corresponding .jigdo
and .template
files and save them in ~/Desktop/Debian/
.
With the necessary Jigdo files in place, proceed to download and assemble the Debian ISO.
cd ~/Desktop/Debian/
jigdo-lite debian-12.7.0-i386-DVD-1.jigdo
For scenarios involving multiple Jigdo and template files (e.g., 21 ISOs), scripting can automate the download process. Below are two versions of the Zsh script, each with different approaches for automating responses to Jigdo prompts.
For Zsh (using yes
for continuous "Enter" presses):
#!/bin/zsh
# Directory containing all .jigdo and .template files
JIGDO_DIR=~/Desktop/Debian/
# Navigate to the Jigdo directory
cd "$JIGDO_DIR" || exit
# Loop through all .jigdo files and execute jigdo-lite, with continuous "Enter" key presses
for jigdo_file in *.jigdo; do
echo "Processing $jigdo_file..."
# Use 'yes' to simulate continuous "Enter" presses for each file
yes '' | jigdo-lite "$jigdo_file"
done
This version of the script employs the yes
command to repeatedly send an empty string, simulating continuous "Enter" presses until all Jigdo prompts are satisfied. This method is useful if the number of prompts varies or if additional confirmations are required during the download process.
For Zsh (using printf
for exactly two "Enter" presses):
#!/bin/zsh
# Directory containing all .jigdo and .template files
JIGDO_DIR=~/Desktop/Debian/
# Navigate to the Jigdo directory
cd "$JIGDO_DIR" || exit
# Loop through all .jigdo files and execute jigdo-lite, with exactly two "Enter" key presses
for jigdo_file in *.jigdo; do
echo "Processing $jigdo_file..."
# Use 'printf' to simulate pressing "Enter" twice for each file
printf '\n\n' | jigdo-lite "$jigdo_file"
done
This version of the script uses printf
to send exactly two newline characters, simulating two "Enter" key presses. It is beneficial when only two prompts are expected, as it avoids continuous input and provides controlled interaction with the Jigdo process.
For Bash (using yes for continuous "Enter" presses):
#!/bin/bash
# Directory containing all .jigdo and .template files
JIGDO_DIR=~/Desktop/Debian/
# Navigate to the Jigdo directory
cd "$JIGDO_DIR" || exit
# Loop through all .jigdo files and execute jigdo-lite, with continuous "Enter" key presses
for jigdo_file in *.jigdo; do
echo "Processing $jigdo_file..."
# Use 'yes' to simulate continuous "Enter" presses for each file
yes '' | jigdo-lite "$jigdo_file"
done
For Bash (using printf for exactly two "Enter" presses):
#!/bin/bash
# Directory containing all .jigdo and .template files
JIGDO_DIR=~/Desktop/Debian/
# Navigate to the Jigdo directory
cd "$JIGDO_DIR" || exit
# Loop through all .jigdo files and execute jigdo-lite, with exactly two "Enter" key presses
for jigdo_file in *.jigdo; do
echo "Processing $jigdo_file..."
# Use 'printf' to simulate pressing "Enter" twice for each file
printf '\n\n' | jigdo-lite "$jigdo_file"
done
Choose the script version based on the expected interaction with Jigdo prompts. The yes
command version is suitable for continuous responses, while the printf
version provides a precise number of responses.
chmod +x ~/Desktop/download_isos.zsh
~/Desktop/download_isos.zsh
Replace with download_isos.sh
for Bash.
To facilitate Debian installation from an SD card, the ISO file must be properly written to the SD card. This guide provides a brief overview for Windows users utilizing Rufus, followed by detailed steps for macOS users using built-in tools.
Download Rufus from the official website. Rufus is a straightforward tool for creating bootable media from ISO files on Windows.
This section provides detailed steps to create a bootable Debian SD card on macOS without third-party software.
Insert the SD card and open Terminal. Use the following command to display all drives and identify the SD card by its size:
diskutil list
Note the identifier for the SD card (such as /dev/disk9
).
Unmount the SD card with the command:
diskutil unmountDisk /dev/disk9
Next, erase and format the SD card:
sudo diskutil eraseDisk FAT32 BOOT MBRFormat /dev/disk9
The command eraseDisk
initiates the process of removing all existing data on the SD card. The FAT32
parameter specifies the filesystem to be used, ensuring compatibility across various operating systems. The label BOOT
names the new partition, and MBRFormat
sets the partition scheme to Master Boot Record, which is suitable for booting purposes.
/dev/disk0 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk0
   1:             Apple_APFS_ISC Container disk1         524.3 MB   disk0s1
   2:                 Apple_APFS Container disk3         2.0 TB     disk0s2
   3:        Apple_APFS_Recovery Container disk2         5.4 GB     disk0s3
/dev/disk3 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +2.0 TB     disk3
                                 Physical Store disk0s2
   1:                APFS Volume Macintosh HD - Data     807.5 GB   disk3s1
   2:                APFS Volume Macintosh HD            10.8 GB    disk3s3
   3:              APFS Snapshot com.apple.os.update-... 10.8 GB    disk3s3s1
   4:                APFS Volume Preboot                 12.3 GB    disk3s4
   5:                APFS Volume Recovery                1.9 GB     disk3s5
   6:                APFS Volume VM                      2.1 GB     disk3s6
/dev/disk4 (disk image):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +10.4 GB    disk4
   1:                 Apple_APFS Container disk5         10.4 GB    disk4s1
/dev/disk5 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +10.4 GB    disk5
                                 Physical Store disk4s1
   1:                APFS Volume watchOS 10.5 21T575 ... 10.1 GB    disk5s1
/dev/disk6 (disk image):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +17.6 GB    disk6
   1:                 Apple_APFS Container disk7         17.6 GB    disk6s1
/dev/disk7 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +17.6 GB    disk7
                                 Physical Store disk6s1
   1:                APFS Volume iOS 17.5 21F79 Simul... 17.0 GB    disk7s1
/dev/disk9 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *32.0 GB    disk9
   1:             Windows_FAT_32 bootfs                  268.4 MB   disk9s1
   2:                      Linux                         31.7 GB    disk9s2
The output of the diskutil list
command provides a detailed view of the storage devices connected to the MacBook. Below is an interpretation of each entry:
This device, with a capacity of 2.0 TB, represents an internal and physical disk, meaning it is permanently installed within the MacBook rather than being a removable or virtual drive. The disk utilizes the GUID Partition Scheme and contains the following partitions:
This entry signifies a synthesized APFS container generated by macOS, encompassing various volumes related to /dev/disk0
. It includes multiple essential system partitions such as:
These entries correspond to disk images, likely representing mounted virtual drives or other macOS system images:
disk4
and disk6
represent GUID partition schemes for disk images.
disk5
and disk7
are synthesized from disk4
and disk6
, respectively, containing APFS volumes for watchOS and iOS simulators.This 32.0 GB device is marked as both "internal" and "physical," indicating a physically removable medium recognized as part of the MacBook's internal hardware interface, such as an SD card slot. It employs an FDisk partition scheme, commonly associated with devices formatted for compatibility across various operating systems. The two partitions present are:
Windows_FAT_32
partition, typical for broader compatibility with Windows systems, often used in boot setups.Linux
partition, suggesting prior use for a Linux-based operating system or data storage.The identifying characteristics of /dev/disk9
—a size of 32.0 GB, an FDisk partition scheme, and a removable nature—indicate this is the SD card. Such removable drives are recognized as "internal, physical" due to their connection through a built-in card reader or slot, contrasting with virtual or purely internal SSDs and HDDs that are non-removable.
macOS requires the ISO to be in .img
format for the dd
utility. Convert the Debian ISO by running:
hdiutil convert -format UDRW -o ~/Desktop/debian.img ~/Desktop/debian-12.7.0-arm64-DVD-1.iso
The hdiutil
command-line tool is utilized for working with disk images in macOS. The UDRW
format specifies an uncompressed read/write image, which is necessary for the subsequent dd
operation. The output location is set to ~/Desktop/debian.img
, and the source file is ~/Desktop/debian-12.7.0-arm64-DVD-1.iso
.
If macOS appends .dmg
to the output file (resulting in debian.img.dmg
), this extension remains acceptable for the next step.
Use dd to transfer the IMG file to the SD card. The dd utility performs a low-level copy of data from one location to another. The parameters used are:

- if=~/Desktop/debian.img.dmg: Specifies the input file.
- of=/dev/disk9: Specifies the output file (the SD card).
- bs=1m: Sets the block size to 1 megabyte, optimizing the copy process for speed.

The command to execute is:
sudo dd if=~/Desktop/debian.img.dmg of=/dev/disk9 bs=1m
Prior to executing this command, it is necessary to unmount the SD card using diskutil unmountDisk /dev/disk9
. This ensures that no other processes are accessing the disk, preventing potential data corruption during the write operation.
Be cautious when using dd
, as it can overwrite any specified drive without warning. This process can take a few minutes; no progress is shown by default.
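For long-running copies, progress can be checked without interrupting dd (an optional aside; BSD dd on macOS reports its current status when it receives SIGINFO):

# Press Ctrl+T in the terminal running dd, or from another terminal:
sudo pkill -INFO -x dd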
Once the process is complete, safely eject the SD card with:
diskutil eject /dev/disk9
The SD card is now prepared as a bootable Debian installation medium.
Configuring Debian to utilize local ISO files as repositories enhances package management efficiency, particularly in environments with limited or unreliable internet connectivity. This guide outlines the process of mounting multiple ISO files, updating the package manager’s sources list to prioritize local repositories, and automating the mounting process for sustained convenience.
A fundamental step involves creating directories designated for mounting each ISO file. Organizing these directories under /media
maintains system orderliness.
sudo mkdir -p /media/debian-iso{1..21}
- mkdir -p: Creates the specified directories along with any necessary parent directories.
- /media/debian-iso{1..21}: Generates directories named /media/debian-iso1 through /media/debian-iso21.

Automating the mounting process ensures efficiency when handling multiple ISO files. A Bash script is employed to mount each ISO to its corresponding directory as a loop device.
emacs ~/mount_debian_isos.sh -nw
#!/bin/bash
# Base directory where ISO files are located
ISO_DIR=/home/frank/Downloads
MOUNT_DIR=/media
# Loop through all 21 ISOs and mount them
for i in {1..21}; do
    ISO_FILE="$ISO_DIR/debian-12.7.0-arm64-DVD-$i.iso"
    MOUNT_POINT="$MOUNT_DIR/debian-iso$i"
    if [ -f "$ISO_FILE" ]; then
        echo "Mounting $ISO_FILE to $MOUNT_POINT..."
        sudo mount -o loop,ro "$ISO_FILE" "$MOUNT_POINT"
    else
        echo "Warning: $ISO_FILE does not exist."
    fi
done
- Shebang (#!/bin/bash): Specifies that the script should be executed in the Bash shell.
- ISO_DIR: Directory containing the ISO files. Modify this path if the ISOs are stored elsewhere.
- MOUNT_DIR: Base directory for mounting the ISOs.
- loop: Mounts the ISO as a loop device.
- ro: Mounts the ISO as read-only to prevent modifications.

Make the script executable and run it:

chmod +x ~/mount_debian_isos.sh
~/mount_debian_isos.sh
The mount
command is versatile and used across various scenarios, from mounting ISO files to accessing network drives and USB devices. Below are several common examples of the mount
command, demonstrating frequently used options and configurations:
1. Mount an ISO File as a Loop Device: This example mounts an ISO file as a read-only loop device using the ISO 9660 filesystem type.
sudo mount -t iso9660 -o loop /home/frank/Downloads/debian-9.5.0-amd64-DVD-1.iso /media/d1
2. Mount a USB Drive Automatically: Linux often automatically recognizes and mounts USB drives to /media/username/DRIVENAME
. However, manual mounting is also possible:
sudo mount /dev/sdb1 /mnt/usb
- The device name (such as /dev/sdb1) can be identified with lsblk or fdisk -l.
- The mount point must exist; create it with sudo mkdir -p /mnt/usb.

3. Mount a Windows NTFS Drive: For dual-boot systems, accessing Windows partitions from Linux may require specifying the NTFS filesystem type.
sudo mount -t ntfs-3g /dev/sda1 /mnt/windows
4. Mount a Network Share (NFS): Network File System (NFS) is widely used for accessing remote file systems across a network.
sudo mount -t nfs 192.168.1.100:/shared-folder /mnt/nfs
5. Mount a CIFS (Windows/Samba) Network Share: CIFS (Common Internet File System) is a network protocol that allows access to shared folders from Windows or Samba servers.
sudo mount -t cifs -o username=frank,password=yourpassword //192.168.1.101/shared-folder /mnt/cifs
6. Mount a Disk Partition as Read-Only: For forensic or data recovery purposes, mounting a partition in read-only mode prevents any accidental modifications.
sudo mount -o ro /dev/sdc1 /mnt/readonly
7. Mount a Bind Directory (Make One Directory Accessible at Another Path): The bind
option allows one directory to be mounted at another path, effectively mirroring its contents.
sudo mount --bind /var/www/html /mnt/website
After mounting, the contents of /var/www/html will be accessible at /mnt/website.

Updating sources.list to Include Local and Online Repositories

Configuring the APT package manager to prioritize local ISO repositories while retaining the ability to access online sources involves editing the sources.list file. The order of entries dictates the priority, with earlier entries being preferred.

Open sources.list for editing:

sudo emacs /etc/apt/sources.list
Append the following lines to the end of the file. Ensure that bookworm
is the correct codename for the Debian release in use. Adjust accordingly if a different release is active.
# Local Debian ISO repositories
deb [trusted=yes] file:/media/debian-iso1/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso2/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso3/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso4/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso5/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso6/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso7/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso8/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso9/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso10/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso11/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso12/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso13/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso14/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso15/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso16/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso17/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso18/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso19/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso20/ bookworm main contrib
deb [trusted=yes] file:/media/debian-iso21/ bookworm main contrib
# Online Debian repositories
deb http://deb.debian.org/debian bookworm main contrib
deb-src http://deb.debian.org/debian bookworm main contrib
deb http://deb.debian.org/debian-security bookworm-security main contrib
deb-src http://deb.debian.org/debian-security bookworm-security main contrib
deb http://deb.debian.org/debian bookworm-updates main contrib
deb-src http://deb.debian.org/debian bookworm-updates main contrib
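As an alternative to typing the 21 local entries by hand, they can be appended with a short loop (a sketch assuming the same /media/debian-isoN mount points used above; verify the result in /etc/apt/sources.list afterwards):

for i in {1..21}; do
    echo "deb [trusted=yes] file:/media/debian-iso$i/ bookworm main contrib"
done | sudo tee -a /etc/apt/sources.list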
- deb [trusted=yes] file:/media/debian-isoX/ bookworm main contrib: Specifies each local ISO as a trusted repository. The trusted=yes option bypasses signature verification; ensure ISOs are obtained from official sources to maintain security.
- Online repositories: Both binary (deb) and source (deb-src) repositories are included as needed.

Refreshing the APT package database allows the system to recognize the newly configured repositories.
sudo apt update
This command updates the package index, enabling APT to acknowledge packages available from both local ISO repositories and online sources.
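To see which repository a particular package would be installed from, apt's policy output can be consulted (an optional check; emacs here is just an example package):

apt-cache policy emacs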
Packages can now be installed using APT, with the system prioritizing local ISO repositories before consulting online sources.
sudo apt install <package_name>
Replace <package_name>
with the desired package to install.
For example, the packages typically needed for kernel and module development can now be installed from these repositories:

# Debian (generic)
sudo apt-get install build-essential linux-headers-$(uname -r)
# Raspberry Pi
sudo apt-get install build-essential gcc raspberrypi-kernel-headers raspberrypi-kernel
sudo apt-get install build-essential emacs raspberrypi-kernel-headers git bc bison flex libc6-dev libncurses5-dev make
# Cross-compilation toolchain for 32-bit ARM targets
sudo apt install crossbuild-essential-armhf
To ensure that all ISO files are mounted automatically upon system boot, entries can be added to the /etc/fstab
file. This guarantees that the local repositories are available without manual intervention each time the system starts.
Open /etc/fstab for editing:

sudo emacs /etc/fstab
Append the following lines to the end of the file.
# Mount Debian ISO repositories
/home/frank/Downloads/debian-12.7.0-arm64-DVD-1.iso /media/debian-iso1 iso9660 loop,ro 0 0
/home/frank/Downloads/debian-12.7.0-arm64-DVD-2.iso /media/debian-iso2 iso9660 loop,ro 0 0
# ...
/home/frank/Downloads/debian-12.7.0-arm64-DVD-21.iso /media/debian-iso21 iso9660 loop,ro 0 0

Note that /etc/fstab does not expand ~, so absolute paths must be used, and the filenames must match the ISOs actually downloaded.
- Filesystem type (iso9660): Standard filesystem for ISO images.
- loop: Mounts the file as a loop device.
- ro: Mounts the ISO as read-only.
- Dump and pass fields (0 0): Commonly set to 0 0 for ISO mounts, indicating no dump and no filesystem check.

Apply the new entries with:

sudo mount -a
This command mounts all filesystems specified in /etc/fstab
without necessitating a system reboot.
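A quick way to confirm that the ISOs are mounted (an optional check; findmnt is part of util-linux and present on standard Debian installs):

findmnt -t iso9660
# or
mount | grep /media/debian-iso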
Before diving into device and kernel programming, it is crucial to understand the core system information. The following commands offer insights into the Linux system's hardware, kernel, and architecture.
The hostnamectl
command provides a basic overview of the system, including the hostname, operating system, and kernel version.
# System Information
hostnamectl
The uname
command displays kernel and architecture information.
# Kernel Version
uname -r
# Machine Architecture
uname -m
- uname -r: Displays the kernel version, which is essential for module development or custom kernel builds.
- uname -m: Reveals the machine architecture (e.g., x86_64 or arm64), ensuring compatibility with drivers and modules.

To check the Linux distribution and version details:
# Linux Distribution and Version
lsb_release -a
Detailed CPU information can be obtained using lscpu
or by inspecting /proc/cpuinfo
.
# Human-readable CPU Information
lscpu
# Detailed CPU Information
cat /proc/cpuinfo
- lscpu: Provides CPU architecture, cores, threads, and cache sizes in a human-readable format.
- cat /proc/cpuinfo: Offers detailed CPU information, useful for performance benchmarking when developing CPU-bound kernel modules.

Linux exposes detailed information about devices and their drivers through the /sys
and /proc
filesystems, as well as utility commands.
To check the drivers associated with devices and view system buses:
# List PCI Devices with Kernel Modules
lspci -k
# View Input Devices
cat /proc/bus/input/devices
- lspci -k: Lists PCI devices along with their associated kernel modules.
- cat /proc/bus/input/devices: Displays information about input devices connected to the system.

The /sys
filesystem provides a way to interact with kernel objects.
For example, to check the status of a network interface:
# Network Interface Status
cat /sys/class/net/eth0/operstate
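The same directory exposes many other attributes that can simply be read as files (assuming an interface named eth0, as in the example above):

# List every attribute exposed for the interface
ls /sys/class/net/eth0/
# MAC address and current MTU
cat /sys/class/net/eth0/address
cat /sys/class/net/eth0/mtu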
To list available filesystems supported by the kernel:
# Available Filesystems
cat /proc/filesystems
Understanding the hardware landscape and relationships between different buses and devices is essential for advanced device programming.
The lspci
command is used to inspect PCI devices:
# PCI Devices in Tree View
lspci -tv
# PCI Devices with Kernel Modules
lspci -k
# Verbose PCI Device Information
lspci -vv
# Filter PCI Devices (e.g., USB Controllers)
lspci -v | grep USB
- lspci -tv: Shows a hierarchical tree of PCI devices and their connections.
- lspci -k: Links kernel modules to devices, verifying which drivers are in use.
- lspci -vv: Provides detailed device information, such as capabilities, power management, and interrupt settings.
- lspci -v | grep USB: Filters specific device classes, like USB controllers.

For USB device driver development:
# USB Devices in Tree View
lsusb -tv
# Verbose USB Device Information
lsusb -v
# Inspect Specific USB Device
lsusb -d <vendor_id:product_id>
- lsusb -tv: Provides a tree view of connected USB devices.
- lsusb -v: Gives verbose information about each USB device, including descriptors, vendor IDs, and product IDs.
- lsusb -d <vendor_id:product_id>: Inspects a specific USB device.

For debugging or interacting with USB devices programmatically, tools like usbmon
and Wireshark can capture USB traffic.
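A minimal usbmon capture from the command line, as a sketch (assumes the kernel provides the usbmon module and that debugfs is mounted at /sys/kernel/debug; the 0u file aggregates traffic from all buses):

sudo modprobe usbmon
sudo cat /sys/kernel/debug/usb/usbmon/0u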
To view block devices such as disks and partitions:
# List Block Devices
lsblk
# List Block Devices with Filesystem Information
lsblk -f
- lsblk: Provides a clear hierarchical structure of block devices.
- lsblk -f: Adds filesystem and UUID information, useful for understanding mounted devices and their properties.

For low-level inspection of SCSI devices:
# List SCSI Devices
lsscsi
- lsscsi: Lists all SCSI devices connected to the system.

Tracking interrupts and I/O performance is vital for optimizing system interactions with hardware.
To view the number of interrupts per CPU for each I/O device:
# Interrupts per Device
cat /proc/interrupts
This is useful when optimizing interrupt handling or diagnosing hardware IRQ conflicts.
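To watch the counters change in real time, for example while exercising a device, the file can be polled (watch is available on standard Debian installs):

watch -n 1 cat /proc/interrupts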
The iostat
and iotop
commands help monitor I/O device performance:
# Extended I/O Statistics (updates every second)
iostat -x 1
# Block Device Statistics
iostat -d 1
# Monitor I/O Usage by Process
iotop
- iostat -x: Provides extended statistics like utilization, throughput, and wait time.
- iostat -d: Focuses on block devices, updating statistics every second.
- iotop: Displays real-time I/O usage by process, useful for identifying processes performing heavy I/O operations.

Memory management and caching are critical when programming kernel modules or drivers.
To view detailed memory statistics:
# Memory Information
cat /proc/meminfo
This provides information such as total memory, free memory, available swap, and cached memory.
To list memory cache information:
# Cache Information
sudo lshw -C memory
- lshw -C memory: Displays information about the system's memory hierarchy, including caches.

Delving into specific hardware capabilities is facilitated by tools for inspecting buses, devices, and subsystems.
The lshw
command provides detailed information about the hardware configuration.
# Hardware Path Tree View
sudo lshw -short
# Class-specific Information
sudo lshw -short -C bus
sudo lshw -short -C cpu
sudo lshw -short -C storage
# Bus Information
sudo lshw -businfo
- lshw -short: Provides a concise list of hardware.
- lshw -short -C [class]: Focuses on specific classes like bus, cpu, or storage.
- lshw -businfo: Shows how devices are connected to various buses.

Device-specific information can be extracted using hwinfo
and other utilities.
To list all supported filesystems in the kernel:
# Supported Filesystems
cat /proc/filesystems
This is useful when developing storage drivers or working with filesystems.
The hwinfo
command provides detailed information about hardware components.
# Concise Hardware Summary
sudo hwinfo --short
# Detailed USB Information
sudo hwinfo --usb
- hwinfo --short: Gives a concise summary of all detected devices.
- hwinfo --usb: Provides detailed information about USB devices, including vendor and product IDs.

For storage systems or SCSI devices, in-depth tools are necessary to inspect and configure devices.
To show the hierarchy of SCSI devices:
# SCSI Device Hierarchy
lsblk -s
- lsblk -s: Displays block devices in reverse dependency order, showing the relationship between devices.

To obtain detailed SMART information for storage devices:
# SMART Information
sudo smartctl -a /dev/sda
- smartctl: Provides disk health, performance data, and potential failure indicators.

Real-time monitoring of device and kernel activity is essential for performance tuning and debugging.
The top
or htop
commands can be used to monitor processes and system load:
# Real-time Process Monitoring
top
# Enhanced Process Monitoring
htop
- top: Displays system processes and resource usage.
- htop: An interactive process viewer with more details and a user-friendly interface.

To monitor block device I/O:
# Block Device I/O Statistics
iostat -d
- iostat -d: Shows I/O statistics for block devices.

To monitor network interfaces:
# Real-time Network Interface Monitoring
iftop
- iftop: Displays bandwidth usage on network interfaces.

Tools like perf
are used to monitor kernel and application performance, helping identify bottlenecks.
To monitor CPU performance, system calls, and events:
# Live CPU Performance Monitoring
sudo perf top
To record a performance profile:
# Record Performance Data
sudo perf record -a
To display the recorded performance data:
# Report Performance Data
sudo perf report
- perf top: Provides a real-time view of system performance.
- perf record: Collects performance data over time.
- perf report: Analyzes and displays the collected data.

Kernel modules extend system functionality without requiring a reboot and are commonly used in device driver development.
To list all loaded kernel modules:
# List Loaded Kernel Modules
lsmod
To load a kernel module manually:
# Load a Kernel Module
sudo modprobe <module_name>
To unload a kernel module:
# Unload a Kernel Module
sudo modprobe -r <module_name>
To get detailed information about a specific module:
# Module Information
modinfo <module_name>
This provides information such as module parameters, dependencies, and author.
Kernel headers are required when building or debugging modules:
# Install Kernel Headers
sudo apt-get install linux-headers-$(uname -r)
Kernel logs are essential for tracking system errors and debugging device driver issues.
To view real-time kernel logs:
# Real-time Kernel Logs
dmesg -w
Access kernel logs through the systemd journal:
# Kernel Logs via systemd journal
journalctl -k
To filter logs and focus on specific messages:
# Filter Kernel Logs
dmesg | grep <keyword>
Replace <keyword>
with the module name or any relevant term to isolate specific log messages.
Monitoring how kernel modules interact with hardware is crucial for device driver development.
To locate messages related to a specific kernel module:
# Module-related Kernel Messages
dmesg | grep <module_name>
This assists in debugging issues like module loading or initialization.
Kernel module development often requires building, inserting, and testing custom modules.
To compile the kernel module for the currently running kernel:
# Compile Kernel Module
make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
To insert the compiled module into the running kernel:
# Insert Kernel Module
sudo insmod mymodule.ko
To remove the kernel module:
# Remove Kernel Module
sudo rmmod mymodule
Note: Use the module name without the .ko
extension when removing.
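As a concrete illustration of the build-insert-remove cycle, the following sketch creates a hypothetical single-file module named hello (the name, messages, and paths are placeholders, not part of the procedure above) and builds it against the running kernel's headers:

# Write a minimal module source file
cat > hello.c << 'EOF'
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

static int __init hello_init(void)
{
        pr_info("hello: module loaded\n");
        return 0;
}

static void __exit hello_exit(void)
{
        pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
EOF

# kbuild only needs the object list; the build itself is driven by make -C below
echo 'obj-m += hello.o' > Makefile

make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
sudo insmod hello.ko
dmesg | tail      # expect the "hello: module loaded" message
sudo rmmod hello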
For kernel module developers, debugfs
and ftrace
are invaluable for exposing internal kernel data and tracing function calls.
To mount the debugfs filesystem:
# Mount debugfs
sudo mount -t debugfs none /sys/kernel/debug
To trace function calls or events in the kernel:
# Set the Current Tracer to 'function'
echo function | sudo tee /sys/kernel/debug/tracing/current_tracer
To view the trace output:
# View Tracing Output
cat /sys/kernel/debug/tracing/trace
This allows tracing of function calls, which is invaluable for kernel debugging.
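The tracer can also be restricted to particular functions before it is enabled, which keeps the output manageable (a sketch; vfs_read is only an example symbol, and the kernel must be built with function tracing support):

echo vfs_read | sudo tee /sys/kernel/debug/tracing/set_ftrace_filter
echo function | sudo tee /sys/kernel/debug/tracing/current_tracer
cat /sys/kernel/debug/tracing/trace
# Reset to the default tracer when finished
echo nop | sudo tee /sys/kernel/debug/tracing/current_tracer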
Kernel parameters can be inspected and tweaked using the /proc/sys
directory or via the sysctl
command.
To view all kernel parameters:
# View All Kernel Parameters
sysctl -a
To modify a kernel parameter (e.g., increasing the maximum number of open files):
# Increase Maximum Open Files
sudo sysctl -w fs.file-max=100000
To make the change permanent, add the parameter to /etc/sysctl.conf
.
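For example, the setting used above can be persisted and reloaded without a reboot:

echo 'fs.file-max = 100000' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p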
Building custom kernels or kernel modules is sometimes necessary when developing for the Linux kernel.
To configure and build a custom kernel:
# Configure Kernel Options
make menuconfig
# Compile the Kernel
make
# Install the Kernel and Modules
sudo make modules_install install
For building and inserting custom kernel modules:
# Compile Kernel Module
make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
# Insert Kernel Module
sudo insmod mymodule.ko
# Remove Kernel Module
sudo rmmod mymodule
Linux kernel programming on the Raspberry Pi presents unique challenges and considerations compared to traditional desktop or server environments. The Raspberry Pi, being a single-board computer based on the ARM architecture, requires specific approaches for kernel development, module compilation, and device driver integration. This guide explores the differences, methodologies, and best practices for effective kernel programming on the Raspberry Pi.
- Install the Raspberry Pi kernel headers on the Pi itself:

sudo apt-get install raspberrypi-kernel-headers

- Install the usual build tools, such as build-essential, gcc, make, and git.
- For cross-compiling modules on another machine, install the ARM cross-toolchain:

sudo apt-get install gcc-arm-linux-gnueabihf

- Set the build variables for the target, typically ARCH=arm, CROSS_COMPILE=arm-linux-gnueabihf-, and KERNELDIR=/path/to/kernel/sources.
- Build the module with the make command with appropriate flags:

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf-

- Copy the resulting .ko module file to the Raspberry Pi.
- Load the module with insmod or modprobe:

sudo insmod module_name.ko

- Remove the module with rmmod:

sudo rmmod module_name

- Verify loading with lsmod and check dmesg for logs.

To build the Raspberry Pi kernel itself, clone the official kernel sources:

git clone --depth=1 https://github.com/raspberrypi/linux
cd linux
KERNEL=kernel7
make bcm2709_defconfig

Adjust the configuration if needed with menuconfig or xconfig:

make menuconfig

Build the kernel, modules, and device tree blobs using make with appropriate flags for cross-compilation:

make -j4 zImage modules dtbs
sudo make modules_install

Copy the new kernel and device tree files to the boot partition:

sudo cp arch/arm/boot/zImage /boot/kernel7.img
sudo cp arch/arm/boot/dts/*.dtb /boot/
sudo cp arch/arm/boot/dts/overlays/*.dtb* /boot/overlays/
sudo cp arch/arm/boot/dts/overlays/README /boot/overlays/