I have made these programs (scripts mostly) publicly available, and you are free to copy, modify, and/or re-distribute them. Please do not remove my name from any of them. If you make modifications, or have suggestions, please mail them back to me so that others may share them.
See my notes in "Password Input", and the specific evolution of this script in "Password Reading using Bash - Ask Pass Stars".
This script has options to let it handle password caching in the Linux Kernel Keyring (see my notes about "Linux Kernel Keyring"), though I have found it better to have calling scripts handle the password caching themselves (see the following programs).
Any small shell script can be stored, and used as needed, and options are provided to let you store, view, change the function, or change the password of the encrypted function. All other options are passed to the encrypted function when decrypted and executed.
Useful for storing secret password generators. For example, to convert a URL into a password for a web site, or to use a different root password on each machine, based on the machine's name.
You can set a TTY_ASKPASS environment variable pointing to a password helper script, like "askpass_stars" above, though it is not required.
The script can also make use of password caching (if enabled), allowing you to re-run the program without needing to re-type the password, at least for a short period of time.
This became especially difficult after openssl v1.1.0, when the default password hashing 'digest' changed (from 'md5' to 'sha256'). With the implementation of PBKDF2 password hashing in v1.1.1 (specifically the number of hashing iterations used), the need to save this metadata became crucial.
Basically the ONLY thing "openssl" saves with the encrypted file is some file identification 'magic', and the random 'salt' it generated for that encryption. This is not enough information to correctly decrypt an openssl encrypted file (beyond the password, that is). That is, all the other public metadata associated with the encryption is not included, which means it could be lost if whatever encrypted it changes (as openssl did with its default options).
The "keepout" header saves the openssl options used, and is straightforward and simple to understand; you can easily add or modify options that have changed in the "openssl" command, using a binary-savvy editor (like "vim") if needed. For example, you can prepend a header to OLD encryptions, with the appropriate (no longer default) options needed to decrypt those files, turning them into 'keepout' encrypted files. You can even do this to plain text files!
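To illustrate (a hedged sketch, not part of "keepout" itself; the file names and password are made up): an encryption made with the old pre-v1.1.0 defaults stores nothing but the magic and salt, so the digest must be repeated exactly, from memory or from a header, to decrypt.

```shell
# Encrypt using the OLD default digest (md5); modern openssl no longer
# assumes this, so it must be given explicitly again to decrypt.
printf 'secret data\n' > plain.txt
openssl enc -aes-256-cbc -md md5 -salt -pass pass:demo \
        -in plain.txt -out old.enc
head -c 8 old.enc     # the only stored metadata: "Salted__" magic + salt
openssl enc -d -aes-256-cbc -md md5 -pass pass:demo \
        -in old.enc -out round.txt
cmp plain.txt round.txt   # identical, but ONLY because "-md md5" matched
```

Leave out the "-md md5" on decryption with a modern openssl and you get garbage, which is exactly the metadata a 'keepout' header records.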
If the encrypted files are saved with a ".kpt" suffix (this is not coded in the script), you can use the VIM Autocmd's (see below) to allow you to edit encrypted files directly, with password caching so as to save the file with the same password again.
The script in many ways is similar to what "aespipe" did, though the wrapper (for the "aespipe" program) did not do a complete job, nor was it as future proof, or as versatile as, "keepout".
However this version uses PBKDF2 to derive the encryption key from the user passphrase. This is a lot safer than simply using the OpenSSL "enc" option to do a 'Salted' file encryption, which uses only a single iteration to derive the encryption key from the user provided password. By using PBKDF2 to iterate the encryption key derivation, you effectively slow down brute force dictionary attacks, making them less practical, without sacrificing the normal usage of pass-phrase to encryption key hashing.
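A sketch of the difference (the cipher, iteration count, and password here are illustrative only): with "-pbkdf2 -iter", the iteration count becomes one more piece of metadata that must be supplied again on decryption.

```shell
# Iterated (PBKDF2) key derivation: requires openssl v1.1.1 or later.
printf 'top secret\n' > data.txt
openssl enc -aes-256-cbc -pbkdf2 -iter 200000 -salt -pass pass:demo \
        -in data.txt -out data.enc
# Decryption must repeat BOTH the cipher and the iteration count.
openssl enc -d -aes-256-cbc -pbkdf2 -iter 200000 -pass pass:demo \
        -in data.enc -out check.txt
cmp data.txt check.txt
```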
See the script header for more information.
If the encrypted files are saved with a ".enc" suffix (not hard coded into this script), you can use the VIM Autocmd's (see below) to allow you to edit encrypted files, making use of password caching to remember the password for saving it again.
Note: This program predates "openssl v1.1.1", which now provides, and recommends, a "-pbkdf2" option. That option addresses the original reason this script was created, effectively obsoleting it. It has basically been superseded by the simpler "keepout" wrapper around the newer 'OpenSSL enc' command.
These are the "vim auto-commands" I use to let me directly edit files encoded or encrypted in various ways (Gzip, GPG, OpenSSL, and my encrypt and keepout scripts), based on the filename suffix. My own scripts make use of the kernel password caching to remember the password that was used when a file was decrypted for editing, re-using the same password to re-encrypt the file again. That way I do not need to repeatedly type it in when saving the file multiple times while editing, something that has caused me a number of mishaps due to 'finger memory' mistakes, and the resulting loss of data. The cache is automatically erased after 30 minutes, or at the end of the editing session.
Note 'readable' means it does not contain characters that could be misinterpreted, especially when printed; e.g. no 'Il' chars, or 'O0Q' chars.
A '-c' option is provided to generate a clean, XKCD or '4words' style of password, while an '-o' option forces compatibility with 'old style' password policies (length, numbers, symbols, capitals), still required by most websites.
I have found a few websites that have unreasonably small length limits, making me wonder how they are storing passwords! One of these, a bank, actually accepts longer passwords, but silently truncates the password at a length of 8, unless you go to the bank personally to reset the password to use a more modern scheme!
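A minimal sketch of how such a 'readable' password could be generated in shell (this is not the actual script; the length and character set are illustrative):

```shell
# Draw 12 random characters from a set with the ambiguous glyphs
# 'I', 'l', '1', 'O', '0', and 'Q' removed.
LC_ALL=C tr -dc 'A-HJ-NPR-Za-km-z2-9' < /dev/urandom | head -c 12; echo
```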
For more information see Password Generation
The 'key files' are an encrypted file, using a user supplied password (via "keepout", see above), and contain the actual (generally completely randomised) binary master key for an encrypted file system, as well as the commands and configuration data needed for the decryption.
The 'key files' are stored in filenames that look like EncFS files, and so can be interleaved into a real directory of a fake EncFS filesystem, to further hide the fact you are using such a keystore. The 'key store' (the directory where 'key files' are stored) can be physically separated from the actual encrypted data (on USB sticks, or away from network mounts), making it more secure (two factor).

For EncFS and CryFS, the configuration file is also stored in the 'key file', so it is not stored with the data. This is one of the major criticisms of using EncFS with a cloud based storage provider. Basically even the 'public' details of the encryption of the stored data are secured. I don't believe in giving a cracker any public help if I can.

Fake 'key files' (name/password pairs) can be added to the 'key store'. These keys can be made to decrypt other data, probably from the same location (interleaved data), or even be made to destroy the access to the real data, to further confuse would-be attackers. It means you can give up a password to fake, or less important, data without compromising the real data, creating plausible deniability and preventing rubber hose attacks. Basically a key file could decrypt something else, or run any command!
Instead of holding a master key and configuration data, the 'key files' can be used to hold some other text data. For example, passwords for various websites, or your mother's secret sauce recipes.
One example is for the 'key file' data to be a complex executable shell script, or even a binary program, that can do other things you want to keep secret. For example, a shell script that holds the password and procedure to access an ultra-secure web site. You then never need to see or remember the details!
Comments welcome.
I previously used this script extensively from the command line, shell scripts, and GUI application launchers, menus, and filesystem mount programs, to mount encrypted filesystems given a user password, without needing root or sudo access.
However its use by me has since been superseded by EncFS and the "ks" script above. EncFS allows me to directly back up and/or file-synchronize the encrypted data between machines, without requiring anything to be decrypted to do so, unlike disk encryption methods.
Bash timestamps are modified so as to include an IEEE format date-time, making them more readable, without affecting BASH reading of the history file after editing is complete.
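As a hedged illustration of the idea (not the actual script, which edits the file reversibly; this sketch only converts for display, and assumes GNU "date"):

```shell
# Bash stores history timestamps as bare "#<epoch-seconds>" lines.
# Convert them to a readable ISO 8601 form for viewing.
awk '/^#[0-9]+$/ {
       cmd = "TZ=UTC date -d @" substr($0, 2) " +%FT%T"
       cmd | getline stamp; close(cmd)
       print "#" stamp; next
     }
     { print }' ~/.bash_history
```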
In some ways this is like the program "boxes", which allows for some very fancy ASCII Art boxing of text, but which does NOT understand even a simple box made from the Unicode characters that this script provides!
The above example does not really show the use of color and bold effects in the output.
This is useful to prevent network commands taking too long waiting for slow remote servers, when the information is not that important. For example, when getting a hostname from a network IP, or a disk quota when the file system is on a remote NFS server that is down.
The script is completely 'Bourne Shell' based, and uses some very complex scripting tricks to allow it to exit immediately when the command does, without any 'sleep interval' pauses, or leaving behind a long running sleep command. For more details of its development see my notes in "Shell Script Hints", in the section "Command Timeout".
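For contrast, here is a naive sketch of the problem (not the actual script): this version works, but can leave an orphaned "sleep" running, which is exactly what the real script's complex tricks avoid.

```shell
# Naive shell timeout: run a command, kill it after N seconds.
timeout_run() {
    secs=$1; shift
    "$@" &                      # run the command in the background
    cmd=$!
    ( sleep "$secs"; kill "$cmd" 2>/dev/null ) &
    dog=$!                      # the watchdog sub-shell
    wait "$cmd"; status=$?      # returns as soon as the command exits
    kill "$dog" 2>/dev/null     # but its "sleep" child may linger on
    return $status
}

timeout_run 5 true              # exits immediately, status 0
timeout_run 1 sleep 10          # killed after ~1 second, non-zero status
```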
Linux machines often have a C version, also called 'timeout', but that is not always available on non-linux machines, and that is where this program fills the gap.
See my notes in "Shell Script Hints", in the section "Command Timeout".
For network programming it can also shutdown one side of a network connection; for example, to send EOF to the server, while still receiving the final result of the send.
The command came out of a Stack Exchange discussion, "Bash read-write file descriptors seek", as an alternative to compiling a C program.
It has worked for me for more than 30 years! I have used it on Sun3, Sun4, Ultrix, Solaris, Linux, and MacOSX, with bourne shells, dash, bash, ksh, and zsh. It should work in any Unix-like environment.
Technically, locating a running script has no general solution, as the script could be piped into a shell, but in practice it does work. See BASHFAQ (28).
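The usual idiom, sketched here with the BASHFAQ (28) caveats in mind, resolves the script's own location from "$0" near the top of the script:

```shell
# Resolve the running script's name and (absolute) directory from $0.
PROGNAME=`basename "$0"`
PROGDIR=`dirname "$0"`
PROGDIR=`cd "$PROGDIR" && pwd`   # make relative invocations absolute
echo "$PROGDIR/$PROGNAME"
```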
Useful for gathering information about the program being run, especially when you plan to later use that command in a shell script, or for co-processing.
Note that getting the exit status of a command while also piping its output is generally difficult in older shells. This script was originally a demonstration of how this can be achieved in the original bourne shell.
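The trick, in a hedged sketch ("false" and "tr" are stand-ins for any command and filter): use an extra file descriptor to smuggle the exit status out of the pipeline.

```shell
# fd 3 saves the real stdout; fd 4 carries the status into the
# command substitution, while the filtered output goes to fd 3.
exec 3>&1
status=`{ { echo data; false; echo $? 1>&4; } | tr 'a-z' 'A-Z' 1>&3; } 4>&1`
exec 3>&-
echo "exit status of 'false' was: $status"
```

In modern bash, "${PIPESTATUS[0]}" does the same job directly; the above works even in the original bourne shell.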
A simple "recover" script can be used to list, and restore specific files and directories, from any of the backup 'cycles'.
For more information see Rsync Backups, and Snapshoting
Files are only size tested initially, with a full comparison performed when a possible match has been found, making this re-hardlinking program very fast. Its complexity is in its algorithm for attempting to merge two separate hardlink groups of the same file. Only when all the files of two hardlink groups are finally merged together as a single hardlinked group is disk space saved, so it goes to great effort to find all such files.
The primary purpose of this program is to attempt to re-link files that were moved or renamed in "rsync" backups. This program can thus make incredible disk space savings by restoring the hardlinking between duplicate files. Lost hardlinks commonly happen when a directory is renamed, causing the hardlinks in later rsync backups to not be made, even though the file itself is untouched (just the directory path changed).
This was needed to remove the hardlinks from files that should not have been hardlinked together; specifically, files in my working home directory that are temporary backups or revisions, configuration files, or SVN copies. This allows the 'separated' files to be edited independently of each other, without a 'vi' or 'cp' modifying ALL the backup copies (revisions).
If this script is renamed to be "mv_reseq", it can then be used to re-sequence all the numbers, so as to remove any gaps, or spread out the numbers so as to add gaps to the sequence. This can be useful to insert and re-arrange the numbered order of the files.
I use both forms of the script quite regularly when dealing with numbered files.
If the script is linked/copied to the filename "cp_perl" or "ln_perl", then it will copy or symbolically link files to the new filename, rather than move or rename them.
Built-in perl expressions have been included to rename files to: all lowercase, all uppercase, capitalised words, punctuation removed, spaces replaced with underscores and vice-versa, and many other common file renamings.
These can be accessed by linking the script to appropriate "mv_*" names (see internal documentation). For example, if the script is linked/copied to the command name "mv_lcase", then that command will rename the given filenames to lowercase.
This script was originally based on a common perl renaming script, the core of which was originally created by Larry Wall, the creator of perl. Many variants exist, including "mmv" on many linux machines, and, under Debian Linux, "rename".
I find it amazing how often downloaded files have the wrong suffix.
ASIDE: The linux "systemd-tmpfiles" program with its user level "tmpfiles.d" configuration files, can do something similar via its 'z' and 'Z' types (adjust mode, glob, recursive). It does not however make distinctions between data files, executables, and directories.
Each "printf" number substitution ("%d", "%x", "%f", etc.) found in the {format_string} is replaced by a number from the comma separated argument ranges, specifying the start,inc,end for that substitution. Sequences can be reversed.
You can have as many "printf" substitutions as you like.
Count up and/or down
Note that the last substitution is incremented before the first
multi_seq "file_%d_%d" 3 3,1
Instead of decimal you can also count in hex (as per perl conventions)
multi_seq "...%02x..." 0x2a,0x4f
note this sequence does include 2e,2f,30,31...
Incrementing ISO standard date (including some illegal dates)...
multi_seq "file_%d-%02d-%02d" 2010,2011 12 31
Generate all possible dates for 2010 and 2011.
The '-f' increments in day-month-year order
multi_seq -f "%02d/%02d/%d" 31 12 2010,2011
"sort" command. This was designed so it does not need to read the whole input list into memory, instead holding only the 'current' selection from the lines it has already read. That is, it has a very small memory footprint. Of course, it will not output the final single random selection until it has finished reading all the input lines, as there is a possibility the last line will be the final selection.
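The technique is the classic single-item 'reservoir sampling'; a minimal awk equivalent (not the actual perl script) looks like this:

```shell
# Each line replaces the current selection with probability 1/NR,
# giving every input line an equal chance of being the final output.
printf '%s\n' alpha beta gamma delta |
awk 'BEGIN { srand() }
     rand() * NR < 1 { line = $0 }
     END { if (NR) print line }'
```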
Also see shell_select_example.sh, and shell_select_bc.sh, which are demonstration programs using "shell_select.pl" to handle both the stdout (normal results) and the stderr (error output) from a "bc" co-process. Basically, bullet-proofing the "bc" regardless of input.
For more information see my own notes on "Co-Processing", in the section on "Co-Processing with both stdout and stderr...".
Based on a similar script by Steve Parker, "Simple Expect Replacement". See my own notes on "Co-Processing", in the section on "Timed Data Pipelines".
The exact results vary greatly depending on the terminal program (especially for colors and attributes) and on the font you are using, though they all work perfectly for the "xterm" program. Other terminals tend to be a pale comparison to the original!
Also it seems that many of the special ANSI graphic character modes are no longer functional with the more modern UTF fonts, but then those have other methods to make use of the vast number of UTF characters now available.
Also see UTF-8 Demo File.
To be used as the last command in an xsession or xinitrc script.
I have a similar version in my actual xsession script which, on a button press, pops up a menu of options: Poweroff, Reboot, Restart, Logout, Cancel. The "Restart" option, however, requires integration with the rest of the session script, to allow it to kill off and restart all startup applications.
It could currently use a re-write, to make better use of newer X window control tools.
A great deal of effort is put into ensuring multiple notifications do not popup on top of each other, but remain neatly ordered in the top-left corner of the screen.
By default it will cause the specified window to 'bounce' like a ball, to highlight its existence. Other actions include a 'shake' left and right, which is commonly used to indicate some error condition (bad password), or moving in circles, or jumping back and forth.
There is both an xwit version and an xdotool version available. The scripts are identical, just using different window control tools.
This program should be linked to some 'key event', such as 'Meta-Print', so you can make an image selection (to find text in) at any time.
For methods of doing this see X Window Event Handling.
This program should be linked to some 'key event', such as is typically provided by a window manager; for example, calling it when the user presses a 'Meta-E' key. See X Window Event Handling.
This lets you setup special 'hot' keys that can type fixed strings (like an email address) or general text selections into ANY input box, whether it be a web browser input form, or a game input window, regardless of whether it accepts a normal 'paste' or not. The application will see the string as coming from the keyboard, and not as a 'paste', which some applications do not accept. Very useful.
List the monitors simply: | xmonitor list |
Clone display to all monitors: | xmonitor clone |
Swap to next active monitor: | xmonitor swap |
Enable secondary monitor only: | xmonitor second |
Left to right order: | xmonitor right |
Left to right order (skip first): | xmonitor -skip right |
WARNING: If a monitor is not working properly, you could be left without any working display. Caution is recommended on "swap" and "second" actions. Linking a "xmonitor clone" command to a hotkey is a very good idea, just in case.
I use this program in some of my icon library scripts to gather information about GIF files, such as the exact colormap, disposal, and delay settings from GIF animations.
Also see ImageMagick Examples, Helper Scripts, and especially the "gif2anim" script. That script originally used this program to gather the information needed to create an ImageMagick command that can re-build the animation from its de-composed frames. However it now uses ImageMagick itself to gather this data.
Note that images can get very large for some servers, so you may want to specify a center point and limit (in terms of 512 tiles) that you want to join together.
You can specify the map set, (dimension), and whether you want day, night, or specific underground level maps.
The DZI image format handles sparse tiling efficiently (there is no need to include empty tiles in the image), so the full DZI image is not quite double the number of original journeymap tiles used. The script 'symlinks' the original journeymap images into the bottom-most level (maximum depth), with the appropriate image names, for direct DZI use. This avoids the need to duplicate those images, saving more than 1/2 the disk space needed to hold the complete DZI image. From these original tiles, all the other 'merged' (zoomed out) layers of the DZI pyramid image are created as needed.
The script is also designed so you can just run it again (with the same options, if any) to add and update new journeymap tiles, and the zoomed out images, as needed.
FUTURE: I hope to eventually layer journeymap 'waypoints' onto the image as well so you can label specific locations.