When I started using a script to add items to my TaskPaper file,
I was a little worried about the script making changes to my file while it
was open in TaskPaper. So I used TaskPaper’s preference to save
my files every five seconds, and nothing bad happened for a while.
Then I started seeing corrupted files. It seems like OS X autosave is doing
something weird. If I poke at it, I can get parts of the file to go missing,
or sometimes a dialog box pops up to complain. But everything works fine as
long as I do an actual “⌘S” save.
To prevent corruption, I added
a few lines to my shell script, which use AppleScript to save my
TaskPaper file before making the changes.
I use pgrep to check if TaskPaper is running, and a
heredoc to send the text of the script to the osascript binary.
if pgrep TaskPaper > /dev/null; then
    /usr/bin/osascript << EOM
tell application "TaskPaper"
    repeat with Doc in documents whose name is "tasks.taskpaper"
        save Doc
    end repeat
end tell
EOM
fi
(It is so much easier to embed AppleScript in a bash script than the other
way around.)
The most widely read post on this site is my 2012 post on scheduling tasks
using launchd. But my knowledge of launchd is limited to my
experience. In particular, I was mistaken about how to set up a task when your
computer has multiple accounts.
(For many years, my wife and I shared an account, mostly because it’s still so
difficult to switch between
accounts and properly share files. But now, with iPhones and
iCloud, it’s even more painful to share an account, so we finally split things
up.)
In my post, I wrote:
If you have multiple users and need something to
run no matter who is logged in, you should look into putting it in
/Library/LaunchAgents.
But this isn’t quite right. For system-wide jobs, there are two
folders that can
contain your Launch Agent plists: /Library/LaunchAgents and
/Library/LaunchDaemons.
The difference is that system-wide Launch Agents
run exactly like per-user
Launch Agents, except that they run once for each user. If you have two users
logged in, the system will run two instances of the Launch Agent job.
Each job will run with that user’s permissions. (This may actually
cause problems. For example, if you need to write to a file, you must use a
different file for each user or use a file that is world-writable.)
Launch Daemons, on the other hand, spawn a single instance, regardless of who is
or is not logged in. By default, these run with root permissions (be careful!),
although you can (and almost always should) customize this with the UserName key.
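For example, adding these two lines inside the job’s <dict> makes the daemon run as an ordinary user instead of root (the user name here is a placeholder):

<key>UserName</key>
<string>grigg</string>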
Here’s my new favorite way to get tasks into TaskPaper.
It’s a combination of Drafts, Dropbox,
launchd, a Python script, and
a shell script.
That sounds convoluted, but once each piece of the pipeline
is in place, I just enter one or more tasks into Drafts on my phone,
and three seconds later, they are in my TaskPaper file on my Mac.
It’s like iCloud, but without the mystery.
Merge new tasks into TaskPaper
I wrote a Python script to insert new tasks in the proper place
in my TaskPaper file. Since TaskPaper files are just plain text, this is not too
complicated.
My script reads in a text file and interprets each line as a new task. If the
task has a project tag, it removes the tag, and then it groups the tasks by
project. Anything without a project is assumed to be in the inbox. Next, it
reads my main TaskPaper file, and figures out where each project begins and
ends. Finally, it inserts each new task at the end of the appropriate project.
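The heart of it looks something like this sketch (the tag syntax and the names are illustrative, not the actual script):

import re

# Assumed tag syntax: a task line like "Pay rent @project(Home)".
PROJECT_TAG = re.compile(r"\s*@project\((.+?)\)")

def group_tasks(inbox_lines):
    """Group new tasks by project, stripping the project tag."""
    groups = {}
    for line in inbox_lines:
        line = line.strip()
        if not line:
            continue
        match = PROJECT_TAG.search(line)
        project = match.group(1) if match else "Inbox"
        task = PROJECT_TAG.sub("", line).strip()
        groups.setdefault(project, []).append("\t- " + task)
    return groups

def merge(taskpaper_lines, groups):
    """Insert each new task at the end of its project."""
    merged = []
    current = None
    for line in taskpaper_lines:
        line = line.rstrip("\n")
        # A top-level line ending in ":" starts a new project.
        if line.endswith(":") and not line.startswith(("\t", " ")):
            if current in groups:
                merged.extend(groups.pop(current))
            current = line[:-1]
        merged.append(line)
    if current in groups:
        merged.extend(groups.pop(current))
    # Projects that don't exist yet are appended at the bottom.
    for project, tasks in groups.items():
        merged.append(project + ":")
        merged.extend(tasks)
    return merged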
A shell script calls the Python script with the correct arguments, merging
my inbox.txt file into my tasks.taskpaper file, and deleting the
now-redundant inbox.txt file. Update: To avoid corrupting
my TaskPaper file, I use some AppleScript within this shell script
to first save the file if it is open.
(Of course, the Python script could have done these last steps also, but it’s much
better to make the Python script generic, so I can use it for other purposes.)
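Pieced together, the wrapper might look roughly like this (the paths and the Python script’s name are assumptions):

#!/bin/bash
INBOX=~/Dropbox/Tasks/inbox.txt
TASKS=~/Dropbox/Tasks/tasks.taskpaper

# Nothing to do if there are no new tasks.
[ -f "$INBOX" ] || exit 0

# Save the document first if TaskPaper has it open (see above).
if pgrep TaskPaper > /dev/null; then
    /usr/bin/osascript << EOM
tell application "TaskPaper"
    repeat with Doc in documents whose name is "tasks.taskpaper"
        save Doc
    end repeat
end tell
EOM
fi

# Hypothetical name for the merge script described above.
python ~/bin/taskpaper_merge.py "$INBOX" "$TASKS" && rm "$INBOX"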
Watch inbox for changes
The next step is to automate the merging. This is where OS X’s launchd
is useful. One solution would be to run the shell script on some kind of timed
interval. But launchd is smarter than that.
Using the WatchPaths key, I can have the shell script run whenever my inbox.txt
file is modified.
Since OS X keeps an eye on all filesystem changes, this actually
has a very low overhead and means that my shell script will be run within seconds
of any modifications to inbox.txt.
Here is my Launch Agent definition, stored in a plist file in ~/Library/LaunchAgents.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>net.nathangrigg.taskpaper-merge-inbox</string>
    <key>Program</key>
    <string>/Users/grigg/bin/taskpaper_merge_inbox.sh</string>
    <key>StandardErrorPath</key>
    <string>/Users/grigg/Library/Logs/LaunchAgents/taskpaper_merge_inbox.log</string>
    <key>StandardOutPath</key>
    <string>/Users/grigg/Library/Logs/LaunchAgents/taskpaper_merge_inbox.log</string>
    <key>WatchPaths</key>
    <array>
        <string>/Users/grigg/Dropbox/Tasks/inbox.txt</string>
    </array>
</dict>
</plist>
Drafts and Dropbox
With the hard work out of the way, I just define a custom Dropbox action in Drafts
that appends text to inbox.txt in my Dropbox folder.
With no fuss, Drafts sends the new task or tasks off to Dropbox, which dutifully
copies them to my Mac, which springs into action, merging them into my TaskPaper
file.
With so many applications and services fighting to be the solution to all of our
problems, it is refreshing to see tools that are happy solving their portion
of a problem and letting you go elsewhere to solve the rest.
I use Time Machine to back up my home iMac to a USB external hard drive.
But I don’t want the Time Machine volume mounted all of the time.
It adds clutter and slows down Finder.
I’ve been using a shell script and a Launch Agent to automatically mount
my Time Machine volume, back it up, and unmount it again.
Since this takes care of running Time Machine, I have Time Machine turned off
in System Preferences.
Shell script
The shell script used to be more complicated, but Apple has been
improving their tools. You could actually do this in three commands:
Mount the volume (line 6).
Start the backup (line 14). The --block flag prevents the command from
exiting before the backup is complete.
Eject the volume (line 16).
Everything else is either logging or to make sure that I only eject the volume
if it wasn’t mounted to begin with. In particular, line 4 checks if the Time
Machine volume is mounted at the beginning.
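A minimal sketch along those lines (the volume name and log messages are assumptions), laid out so the commands land on the line numbers mentioned above:

#!/bin/bash
# A sketch: volume name and log messages are assumptions.

mount | grep -q "/Volumes/Time Machine Backups"
was_mounted=$?
[ $was_mounted -ne 0 ] && diskutil mount "Time Machine Backups"

echo "$(date): Starting backup"

# --block keeps tmutil in the foreground until the backup
# finishes, so we never eject the volume mid-backup.
# (Requires OS X 10.8 or later.)

tmutil startbackup --block

[ $was_mounted -ne 0 ] && diskutil unmount "/Volumes/Time Machine Backups"

echo "$(date): Finished"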
Launch agent
Nothing complicated here. This uses launchd
to run the shell script every two hours
and capture the output to a log file.
I save this as “net.nathangrigg.time-machine.plist” in “/Library/LaunchDaemons”,
so that it is run no matter who is logged in. If you do this, you need to use
chown to set the owner to root, or it will not be run.
If you are the only one that uses your computer, you can just save it in
“~/Library/LaunchAgents”, and you don’t have to worry about changing the owner.
Either way, run launchctl load /path/to/plist to load your agent for the first time.
(Otherwise, it will load next time you log in to your computer.)
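For the system-wide case, that looks something like this:

sudo chown root:wheel /Library/LaunchDaemons/net.nathangrigg.time-machine.plist
sudo launchctl load /Library/LaunchDaemons/net.nathangrigg.time-machine.plist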
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>net.nathangrigg.time-machine</string>
    <key>Program</key>
    <string>/Users/grigg/bin/time-machine.sh</string>
    <key>StandardErrorPath</key>
    <string>/Users/grigg/Library/Logs/LaunchAgents/time-machine.log</string>
    <key>StandardOutPath</key>
    <string>/Users/grigg/Library/Logs/LaunchAgents/time-machine.log</string>
    <key>StartInterval</key>
    <integer>7200</integer>
</dict>
</plist>
Fstab
OS X will still mount your Time Machine volume every time you log in.
You can fix this by adding one line to “/etc/fstab” (which you may
need to create).
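The line takes this form (the UUID shown is a placeholder):

UUID=1234ABCD-12AB-34CD-56EF-ABCDEF123456 none hfs rw,noauto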
Replace the UUID with your drive’s UUID, which you can find using
diskutil info "/Volumes/Time Machine Backups". For more detailed instructions,
see this article by Topher Kessler.
Launchd is a Mac OS X job scheduler, similar to cron.
One key advantage is that if your computer is asleep at a job’s scheduled time,
it will run the job when your computer wakes up.
LaunchControl is a Mac app by soma-zone that helps manage
launchd jobs. It aims to do “one thing well” and succeeds spectacularly.
Whether you are new to writing launchd agents or you already have
some system in place, go buy LaunchControl now.
(I tried to make this not sound like an advertisement, but I failed. This
is not a paid advertisement.)
Complete control
At its core, LaunchControl is a launchd-specific plist editor.
There is no magic. You simply drag the keys you want into your document
and set their values. There is no translation layer forcing you to guess
what to type into the app to get the functionality you know launchd provides.
It is an excellent launchd reference. Every option is fully
annotated, so you won’t have to search the man page or the internet to know what
arguments you need to specify.
Helpful hints
LaunchControl is extremely helpful. If you specify an option that doesn’t make
sense, it will tell you. If the script you want to run doesn’t exist or is not
executable, it will warn you. If you are anything like me, this will save you
four or five test runs as you iron out all of the details of a new job.
Debugging
LaunchControl also acts as a launchd dashboard.
It lets you start jobs manually.
It shows you which jobs are running, and for each job,
whether the last run succeeded or failed.
For jobs that fail, it offers to show you the console output.
This is all information you could have found on your own,
but it is very useful to have it all in one place and available when you need
it.
I’ve been kicking the tires of TaskPaper lately. I’m intrigued by its
minimalist, flexible, plain-text approach to managing a to-do list.
I have a lot of repeating tasks, some with strange intervals. For
example, once per year, I download a free copy of my credit report. But I can’t
just do it every year on January 1, because if I’m busy one year and don’t do it
until the 4th, I have to wait until at least the 4th the following year. You see
the problem. The solution is to give myself a buffer, and plan on downloading
my credit report every 55 weeks.
TaskPaper has no built-in support for repeating tasks, but its plain-text format
makes it easy to manipulate using external scripts. So, for example, I can keep
my repeating tasks in an external file, and then once a month have them inserted
into my to-do list.
The plain-text calendar tool when, which I also use to remember
birthdays, seems like the perfect tool for the job. You store your
calendar entries in a text file using a cron-like syntax. You can also
do more complicated patterns. For example, I put this line in my file:
!(j%385-116), Transunion credit report
The expression !(j%385-116) is true whenever the modified Julian day is
equal to 116 modulo 385. This happens every 385 days, starting today.
When I run when with my new calendar file, I get this output:
today 2014 Feb 22 Transunion credit report
I wrote a quick Python script to translate this into TaskPaper syntax.
#!/usr/bin/python
import argparse
from datetime import datetime
import re
import subprocess

WHEN = "/usr/local/bin/when"

def When(start, days, filename):
    command = [
        WHEN,
        "--future={}".format(days),
        "--past=0",
        "--calendar={}".format(filename),
        "--wrap=0",
        "--noheader",
        "--now={:%Y %m %d}".format(start),
    ]
    return subprocess.check_output(command)

def Translate(line):
    m = re.match(r"^\S*\s*(\d{4} \w{3} +\d+) (.*)$", line)
    try:
        d = datetime.strptime(m.group(1), "%Y %b %d")
    except (AttributeError, ValueError):
        return line
    return " - {} @start({:%Y-%m-%d})".format(m.group(2), d)

def NextMonth(date):
    if date.month < 12:
        return date.replace(month=(date.month + 1))
    else:
        return date.replace(year=(date.year + 1), month=1)

def StartDateAndDays(next_month=False):
    date = datetime.today().replace(day=1)
    if next_month:
        date = NextMonth(date)
    days = (NextMonth(date) - date).days - 1
    return date, days

if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Print calendar items in taskpaper format")
    parser.add_argument("filename", help="Name of calendar file")
    parser.add_argument("-n", "--next", action="store_true",
                        help="Use next month instead of this month")
    args = parser.parse_args()
    date, days = StartDateAndDays(args.next)
    out = When(date, days, args.filename)
    for line in out.split('\n'):
        if line:
            print Translate(line)
This takes the when output, and translates it into something I can dump into
my TaskPaper file:
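 - Transunion credit report @start(2014-02-22)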
After many years of school, I now have a Real Job. Which means I need to
save for retirement. I don’t do anything fancy, just index funds in a
401(k). Nevertheless, I am curious about how my money is growing.
The trouble with caring even a little about the stock market is that all the
news and charts focus on a day at a time. Up five percent, down a percent, down
another two percent. I don’t care about that.
I could average the price changes
over longer periods of time, but that is not helpful because I’m making
periodic contributions, so some dollars have been in the account longer than
others.
What I really want to know is, if I put all my money into a savings account with
a constant interest rate, what would that rate need to be to have the same final
balance as my retirement account?
Now it’s math. A single chunk of money $P$ with interest rate $r$
becomes the well-known $Pe^{rt}$ after $t$ years.
So if I invest a bunch of amounts $P_i$,
each for a different $t_i$ years at
interest rate $r$, I get $\sum_i P_i e^{r t_i}$.
I need to set this equal to the
actual balance $B$ of my account and solve for $r$.
At this point, I could solve the equation using something from
scipy.optimize. But since I’m doing this for fun, I may as well
write something myself. The nice thing about my interest function is that it
increases if I increase r and decreases if I decrease r. (This is called
monotonic and is a property
of the exponential function, but is also intuitively obvious.)
So I can just pick values for r and plug them in, and I’ll
immediately know if I need to go higher or lower. This is a textbook scenario
for a binary search algorithm.
The following Python function will find when our monotonic function is zero.
from __future__ import division  # For Python 2.

def FindRoot(f, lower, upper, tolerance):
    """Find the root of a monotonically increasing function."""
    r = (lower + upper) / 2
    while abs(upper - lower) > tolerance:
        r = (lower + upper) / 2
        if f(r) > 0:
            upper = r
        else:
            lower = r
    return (lower + upper) / 2
This will look for a root between lower and upper, stopping when it gets
within tolerance. At each stage of the loop, the difference between lower
and upper is cut in half, which is why it is called binary search, and which
means it will find the answer quickly.
Now suppose that I have a Python list transactions of pairs (amount, time),
where amount is the transaction amount and time is how long ago in years
(or fractions of years, in my case)
the transaction happened. Also, I have the current balance stored in balance.
The difference between our hypothetical savings account and our actual account
is computed as follows:
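from math import exp

# Sketch: `transactions` and `balance` are as described above.
def Difference(r):
    # Hypothetical continuously compounded balance minus actual.
    return sum(amount * exp(r * time)
               for amount, time in transactions) - balance

# Assumed bracket: the true rate lies between 0% and 100%.
rate = FindRoot(Difference, 0.0, 1.0, 1e-5)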
This will go through the loop about 16 times, since the number of
iterations is roughly $\log_2((\mathit{upper}-\mathit{lower})/\mathit{tolerance})$.
The U.S. government mandates that interest rates be given as annual
percentage yield (APY), which is the amount of interest you would earn on
one dollar in one year, taking compounding into consideration. Since I have assumed
interest is compounded continuously, I should convert to APY for easier
comparison. In one year, one dollar compounded continuously becomes
$e^r$. Subtracting the original dollar, I get the
APY: $\mathrm{APY} = e^r - 1$.
I have used Jekyll for this site ever since I first created it.
I’ve contemplated switching to something Python and Jinja based,
since I’m much more familiar with those tools than I am with Ruby.
But there is something about Jekyll’s simple model that keeps me here.
It’s probably for the best, since it mostly keeps me from fiddling, and
there are better directions to steer my urge to fiddle.
Having said that, I couldn’t help but write one little plugin.
I wrote this so I can look up a page or post by its URL.
It is an excellent companion to Jekyll’s recent support for data files.
The plugin defines a new Liquid
tag called assign_page which works kind of
like the built-in assign tag. If you write
{% assign_page foo = '/archive.html' %}, it creates
a variable called foo that refers to an object containing information
about archive.html. You can then follow with
{{ foo.title }} to get the page’s title.
The plugin code
Here is the code that I store in my _plugins folder.
module Jekyll
  module Tags
    class AssignPage < Liquid::Assign
      TrailingIndex = /index\.html$/

      def page_hash(context)
        reg = context.registers
        site = reg[:site]
        if reg[:page_hash].nil?
          reg[:page_hash] = Hash[(site.posts + site.pages)
            .collect { |x| [x.url.sub(TrailingIndex, ''), x] }]
        end
        return reg[:page_hash]
      end

      # Assign's Initializer stores variable name
      # in @to and the value in @from.
      def render(context)
        url = @from.render(context)
        page = page_hash(context)[url.sub(TrailingIndex, '')]
        raise ArgumentError.new "No page with url #{url}." if page.nil?
        context.scopes.last[@to] = page
        ''
      end
    end
  end
end

Liquid::Template.register_tag('assign_page', Jekyll::Tags::AssignPage)
On Line 3, you see that my AssignPage class is a subclass of Liquid’s Assign
class. Assign defines an initialize method to parse the tag, storing
the variable name in @to and the value in @from.
By not overriding initialize, I get that functionality for free.
On Line 6, I define a function that creates a hash table
associating URLs with
pages. Liquid lets you store stuff in context.registers, and Jekyll stores
the site’s structure in context.registers[:site]. Lines 10 and 11 create the
hash table and store it in context.registers so I don’t have to recreate it
for each assign_page tag. Ignoring the removal of trailing index.html,
this is the same as the Python dictionary comprehension
{x.url: x for x in site.posts + site.pages}
Line 20 uses the hash table to look up the URL. The rest of the lines are pretty
much copied from Assign. Line 19 evaluates @from,
which lets you specify a variable containing the URL instead of just a URL.
Line 22 puts the page in the
proper variable. Line 23 is very important because Ruby functions return
the result of the last statement. Since Liquid will print our function’s
return value, we want to make sure it is blank.
Apple has a history of erasing Python’s site-packages folder during operating
system upgrades, leaving users without their third-party Python modules and
breaking scripts everywhere. Although I’ve heard some reports
that the upgrade to 10.9 left things alone, mine were wiped once again.
Last year when this happened, I vowed to switch everything over to
virtualenv,
which allows you to install packages in a custom location. With
this setup, getting things working again was as easy as recreating
my local.pth file:
sudo vim /Library/Python/2.7/site-packages/local.pth
with a single line containing the path to my virtualenv site packages:
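Mine looks something like this (the exact path depends on where you created the virtualenv):

# (example; use your own virtualenv's path)
/Users/grigg/.virtualenvs/default/lib/python2.7/site-packages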
It’s a long story, but for the last six months, I have been using Vim as my primary text editor. As I began to use Vim more often, I was frustrated by the lack of a tutorial that went beyond the basics. I finally found what I was looking for in Steve Losh’s Learn Vimscript the Hard Way, which is an excellent introduction to Vim’s power features. I also discovered the real reason there are no advanced tutorials, which is that everything you need to know is contained in Vim’s help files.
Vim’s documentation is incredibly complete and very useful. Unfortunately, it makes heavy use of cross references, and the cross references only work with Vim’s internal help viewer. I have no qualms about reading a reference document, but I would strongly prefer to do this kind of reading reclining on a couch with an iPad, rather than Control+F-ing my way through a read-only Vim buffer.
I wanted a way to read and annotate the help files on my iPad. The
files were available as HTML, but annotating HTML files is complicated. There are some apps that can annotate HTML, but there is no standard or portable way to do so.
I converted the HTML files to ePub using Calibre, but Vim’s help is very dependent on having lines that are 80 characters long. This caused problems in iBooks.
So instead, I settled on the old favorite, PDF. I can easily annotate a PDF on my iPad and then move those annotations to my computer or another device. Actually, the Vim documentation was already available in PDF format, but without the internal links.
To convert the Vim help files, which are specially formatted plain text, into a hyperlinked PDF, I started with Carlo Teubner’s HTML conversion script, which takes care of the syntax highlighting and linking. I just needed a way to programmatically make a PDF file.
Latex
Latex is clearly the wrong tool for the job. I don’t need the hyphenation or intelligent line breaking that Latex excels at. All I need is to display the text on a PDF page in a monospace font, preserving whitespace and line breaks. Latex ignores whitespace and line breaks.
But Latex is what I know, and I am very familiar with the hyperref package, which can make internal links for the cross references, so I used it anyway.
I used the fancyvrb package, which preserves whitespace and special characters like the built-in verbatim environment, but also lets you use some Latex commands inside the verbatim text. That made syntax highlighting and internal hyperlinks possible.
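As a toy illustration (not the actual conversion code), the commandchars option makes Latex commands work inside otherwise-verbatim text:

\documentclass{article}
\usepackage{fancyvrb,xcolor,hyperref}
\begin{document}
\hypertarget{intro}{}% target for the example link below
\begin{Verbatim}[commandchars=\\\{\}]
Whitespace   and line breaks survive verbatim, but
\textcolor{purple}{*highlighted*} text and internal links
like \hyperlink{intro}{intro.txt} still work.
\end{Verbatim}
\end{document}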
At one point, I ran into an issue where Latex was botching hyphenated urls. The good people at the Latex StackExchange site figured out how to fix it. The level at which they understand the inner workings of Tex amazes me.
Last month I received my mathematics Ph.D. from the University of Washington.
My mother-in-law said my hat looked ridiculous, but I say tams are cool.
When I began this journey six years ago, my end goal was
to become a math professor. Last year, when it was time to begin applying for
jobs, I was less sure. I enjoyed the academic lifestyle, the teaching, and the
learning, but research was something I did because I was supposed to. A happy
academic has a burning desire to break new ground and make new discoveries in
their field, but I struggled to nurture my spark.
I was scared to leave academia, thinking that either I was in short-term
doldrums or that my fear of not getting a job was
affecting my judgement.
I applied for postdocs, but as my academic future
became more clear, I became more sure that I needed to do something else.
So I took the plunge, withdrew my academic applications, and started a new round
of job applications. This week I started as a software engineer at a
Large Tech Company.
I’m excited for this next adventure!
I have been wanting to learn to use pyplot, but haven’t found the time.
Last week I was inspired by Seth Brown’s post from
last year on command line analytics,
and I decided to make a graph of my most common commands.
I began using zsh on my home Mac about six months ago, and I have
15,000 lines of history since then.
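A quick way to count, assuming the default zsh history location:

wc -l ~/.zsh_history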
I compiled a list of my top commands and made a bar chart using
pyplot. Since git is never used by itself, I separated out the git subcommands. Here are the results:
Clearly, it is time to use bb as an alias for bbedit.
I already have gic and gia set up as aliases for git commit and
git add, but I need to use them more often.
Building the graph
The first step is parsing the history file.
I won’t go into details, but I used Python and the Counter class,
which takes a list and returns a dictionary-like object whose values are
the frequency of each list item.
After creating a list of commands, you count them like this:
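# A sketch: `commands` holds one parsed command per history line.
from collections import Counter

# top_commands is a list of (command, count) pairs; the plotting
# code below expects this name.
top_commands = Counter(commands).most_common(20)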
import matplotlib.pyplot as plt
import matplotlib
import numpy as np

width = 0.6
N = 20
ys = np.arange(N)

# change the font
matplotlib.rcParams['font.family'] = 'monospace'

# create a figure of a specific size
fig = plt.figure(figsize=(5, 5))

# create axes with grid
axes = fig.add_subplot(111, axisbelow=True)
axes.xaxis.grid(True, linestyle='-', color='0.75')

# set ymin, ymax explicitly
axes.set_ylim((-width / 2, N))

# set ticks and title
axes.set_yticks(ys + width / 2)
axes.set_yticklabels([x[0] for x in top_commands])
axes.set_title("Top 20 commands")

# put bars
axes.barh(ys, [x[1] for x in top_commands], width, color="purple")

# Without the bbox_inches, the longer labels got cut off.
# The fractional dpi is to make the pixel width even for the 2x version.
fig.savefig('commands.png', bbox_inches='tight', dpi=160.1)
I still find pyplot pretty confusing. There are several ways to accomplish
everything.
Sometimes you use module functions and sometimes you create
objects. Lots of functions return data that you just throw away.
But it works!
In the time between when I read up on S3 redirects and when I
published a post on what I had learned,
Amazon created a second way to redirect
parts of S3 websites.
The first way redirects a single URL at a time. These are the redirects
I already
knew about, which were introduced last October. They are created by attaching a special piece of metadata to an S3 object.
The second way was introduced in December, and redirects based on prefix. This is probably most useful for redirecting entire folders. You can either rewrite a folder name, preserving the rest of the URL, or redirect the entire folder to a single URL. This kind of redirect is created by uploading an XML document containing all of the redirect rules. You can create and upload the XML, without actually seeing any XML, by descending through boto’s hierarchy until you find boto.s3.bucket.Bucket.configure_website.
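If I understand boto’s helpers correctly, a prefix rule looks roughly like this (the bucket and folder names are made up):

import boto
from boto.s3.website import RoutingRules, RoutingRule

bucket = boto.connect_s3().get_bucket("example.com")

# Rewrite one folder name, preserving the rest of the URL.
rules = RoutingRules()
rules.add_rule(RoutingRule.when(key_prefix="old-folder/")
               .then_redirect(replace_key_prefix="new-folder/"))
bucket.configure_website(suffix="index.html", routing_rules=rules)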
This week I put together a Python script to manage Amazon S3’s web page redirects. It’s a simple script that uses boto to compare a list of redirects to files in an S3 bucket, then upload any that are new or modified. When you remove a redirect from the list, it is deleted from the S3 bucket.
The script is posted on GitHub.
I use Amazon S3 to host this blog. It is a cheap and low-maintenance way to host a static website, although these advantages come with a few drawbacks. For example, up until a few months ago you couldn’t even redirect one URL to another. On a standard web host, this is as easy as making some changes to a configuration file.
Amazon now supports redirects, but they aren’t easy to configure. To set a redirect, you upload a file to your S3 bucket and set a particular piece of metadata. The contents of the file don’t matter; usually you use an empty file. You can use Amazon’s web interface to set the metadata, but this is obviously not a good long-term solution.
Update: There are actually two types of Amazon S3 redirects.
I briefly discuss the other here.
So I wrote a Python script. This was inspired partly by a conversation I had with Justin Blanton, and partly by the horror I felt when I ran across a meta refresh on my site from the days before Amazon supported redirects.
Boto
The Boto library provides a pretty good interface to Amazon’s API. (It encompasses the entire API, but I am only familiar with the S3 part.) It does a good job of abstracting away the details of the API, but the documentation is sparse.
The main Boto objects I need are the bucket object and the key object, which of course represent an S3 bucket and a key inside that bucket, respectively.
The script
The script (listed below) connects to Amazon and creates the bucket object on lines 15 and 16. Then it calls bucket.list() on line 17 to list the keys in the bucket. Because of the way the API works, the listed keys will have some metadata (such as size and md5 hash) but not other (like content type or redirect location). We load the keys into a dictionary, indexed by name.
Beginning on line 20, we loop through the redirects that we want to sync. What we do next depends on whether or not the given redirect already exists in the bucket. If it does exist, we remove it from the dictionary (line 23) so it won’t get deleted later. If on the other hand it does not exist, we create a new key. (Note that bucket.new_key on line 25 creates a key object, not an actual key on S3.) In both cases, we use key.set_redirect on line 32 to upload the key to S3 with the appropriate redirect metadata set.
Line 28 short-circuits the loop if the redirect we are uploading is identical to the one on S3. Originally I was going to leave this out, since it requires a HEAD request in the hopes of preventing a PUT request. But HEAD requests are cheaper and probably faster, and in most cases I would expect the majority of the redirects to already exist on S3, so we will usually save some requests. Also, I wanted it to be able to print out only the redirects that had changed.
At the end, we delete each redirect on S3 that we haven’t seen yet. Line 40 uses Python’s ternary if to find each key’s redirect using get_redirect, but only if the key’s size is zero. This is to prevent unnecessary requests to Amazon.
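A sketch along those lines (the bucket name and example redirects are made up), laid out so the steps land on the line numbers referenced above:

#!/usr/bin/python
# A sketch based on the description above. The bucket name and
# the example redirects are made up; `redirects` maps key names
# to redirect targets.
import boto

BUCKET_NAME = "example.com"
redirects = {
    "old-page.html": "/new-page.html",
    "old-folder/post.html": "/blog/post.html",
}


def sync_redirects():
    conn = boto.connect_s3()
    bucket = conn.get_bucket(BUCKET_NAME)
    keys = dict((k.name, k) for k in bucket.list())

    # Upload redirects that are new or have changed.
    for name, target in redirects.items():
        if name in keys:
            # Already on S3; pop it so it isn't deleted below.
            key = keys.pop(name)
        else:
            key = bucket.new_key(name)  # A local object only.

        # Skip the PUT when the redirect already matches.
        if key.exists() and key.get_redirect() == target:
            continue

        print "Uploading", name
        key.set_redirect(target)

    # Anything left over is not in the redirect list, so delete
    # it if it is a redirect. The bucket listing includes sizes
    # but not redirect metadata, and only empty keys can be
    # redirects, so this avoids needless requests.
    for name, key in keys.items():
        # Ternary `if`: only fetch metadata for zero-byte keys.
        redirect = key.get_redirect() if key.size == 0 else None
        if redirect is not None:
            print "Deleting", name
            bucket.delete_key(name)


if __name__ == "__main__":
    sync_redirects()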
I posted a more complex version of the code on GitHub that has a command line interface, reads redirects from a file, and does some error handling.
How to write a shell script to delete Latex log files.
Also, why you should think about using zsh.
[Update: In addition, I reveal my complete ignorance of Bash. See the note at the end.]
I try not to write a lot of shell scripts, because they get long and complicated
quickly and they are a pain to debug. I made an exception recently because Latex auxiliary files were annoying me,
and a zsh script seemed to be a better match than Python
for what I wanted to do.
Of course, by the time I was finished adding in the different options I wanted,
Python may have been the better choice. Oh well.
For a long time I have had an alias named rmtex which essentially did
rm *.aux *.log *.out *.synctex.gz to rid the current directory of Latex
droppings. This is a dangerous alias because it assumes that all *.log files
in the directory come from Latex files and are thus unimportant.
But I’m careful and have never accidentally deleted anything (at least not
in this way). What I really wanted, though, was a way to make rmtex recurse through subdirectories,
which requires more safety.
Here is what I came up with. (I warned you it was long!)
I will point out some of the key points,
especially the useful things that zsh provides.
#!/usr/local/bin/zsh

# suppress error message on nonmatching globs
setopt local_options no_nomatch

USAGE='USAGE: rmtex [-r] [-a] [foo]

Argument:
    [foo] file or folder (default: current directory)

Options:
    [-h] Show help and exit
    [-r] Recurse through directories
    [-a] Include files that do not have an associated tex file
    [-n] Dry run
    [-v] Verbose
'

# Option defaults
folders=(.)
recurse=false
all=false
dryrun=false
verb=false
exts=(aux synctex.gz log out)

# Process options
while getopts ":ranvh" opt; do
  case $opt in
    r)
      recurse=true
      ;;
    a)
      all=true
      ;;
    n)
      dryrun=true
      verb=true
      ;;
    v)
      verb=true
      ;;
    h)
      echo $USAGE
      exit 0
      ;;
    \?)
      echo "rmtex: Invalid option: -$OPTARG" >&2
      exit 1
      ;;
  esac
done

# clear the options from the argument string
shift $((OPTIND-1))

# set the folders or files if given as arguments
if [ $# -gt 0 ]; then
  folders=$@
fi

# this function performs the rm and prints the verbose messages
function my_rm {
  if $verb; then
    for my_rm_g in $@; do
      if [ -f $my_rm_g ]; then
        echo rm $my_rm_g
      fi
    done
  fi

  if ! $dryrun; then
    rm -f $@
  fi
}

# if all, then just do the removing without checking for the tex file
if $all; then
  for folder in $folders; do
    if [[ -d $folder ]]; then
      if $recurse; then
        for ext in $exts; my_rm $folder/**/*.$ext
      else
        for ext in $exts; my_rm $folder/*.$ext
      fi
    else
      # handle the case that they gave a file rather than folder
      for ext in $exts; my_rm "${folder%%.tex}".$ext
    fi
  done
else
  # loop through folders
  for folder in $folders; do

    # set list of tex files inside folder
    if [[ -d $folder ]]; then
      if $recurse; then
        files=($folder/**/*.tex)
      else
        files=($folder/*.tex)
      fi
    else
      # handle the case that the "folder" is actually a single file
      files=($folder)
    fi
    for f in $files; do
      for ext in $exts; do
        my_rm "${f%%.tex}".$ext
      done
    done
  done
fi

# print a reminder at the end of a dry run
if $dryrun; then
  echo "(Dry run)"
fi
It starts out nice and easy with a usage message.
(Always include a usage message!)
Then it processes the options using getopts.
Zsh has arrays! Notice line 20 defines the default $folders variable to be
an array containing only the current directory.
Similarly, line 25 defines the extensions we are going to delete,
again using an array.
On the subject of arrays, notice that $@ in line 59, which represents
the entire list of arguments passed to rmtex, is also an array.
So you don’t have to worry about writing "$@" to account for
arguments with spaces, like you would have to in Bash.
Lines 63 to 75 define a function my_rm which runs rm,
but optionally
prints the name of each file that it is deleting.
It also allows a “dry run” mode.
On to the deleting.
First I handle the dangerous case, which is when the -a option is given.
This deletes all files of the given extensions, like my old alias.
Notice the extremely useful zsh glob in line 82.
The double star means to look in all subdirectories for a match.
This is one of the most useful features of zsh and keeps me away from
unnecessary use of find.
In lines 93 through 117, I treat the default case.
The $files variable is set to an array of all the .tex files in a given
folder, optionally using the double star to recurse through subdirectories.
We will only delete auxiliary files that live in the same directory as a
tex file of the same name. Notice lines 98 and 100, where the
arrays are defined using globs.
In line 108, I delete each file using the substitution command
${f%%.tex} which removes the .tex extension from $f so
I can replace it with the extension to be deleted.
This syntax is also available in Bash.
My most common use of this is as rmtex -r to clean up a tree full
of class notes, exams, and quizzes that I have been working on,
so that I can find the PDF files
more easily. If I’m feeling especially obsessive, I can always run
rmtex -r ~, which takes a couple of minutes but leaves everything
squeaky clean.
[Update:
While zsh is the shell where I learned how to use arrays and advanced globs,
that doesn’t mean that Bash doesn’t have the same capabilities.
Turns out I should have done some Bash research.
Bash has arrays too!
Arrays can be defined by globs, just as in zsh. The syntax is slightly
different, but works just the same. Version 4 of Bash can even use **
for recursive globbing.
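In Bash, that looks like this:

# Bash 4 only: recursive globbing is off by default.
shopt -s globstar
files=(**/*.tex)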
Thanks to John Purnell for the very gracious email.
My horizons are expanded.]