Quote:
Originally Posted by jmakin
[[ ${failed} -eq 0 ]] && do something
You can actually usually just do
(( failed == 0 )) && something
because for exit statuses in bash, 0 = true and any other number = false — the opposite of most languages. (Note you can't write plain `$failed && something`: that would try to run the value of $failed as a command, not test it.)
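A minimal sketch of both forms side by side (the variable names are just for illustration):

```shell
#!/usr/bin/env bash
failed=0

# The [[ ... ]] test form from the quote:
[[ ${failed} -eq 0 ]] && echo "ok: test form"

# The shorter arithmetic form: (( expr )) returns status 0
# ("true") when the expression evaluates to non-zero.
(( failed == 0 )) && echo "ok: arithmetic form"

failed=1
(( failed == 0 )) || echo "skipped: failed is set"
```

Both forms behave identically; the arithmetic one just reads more like C.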
Quote:
Originally Posted by jmakin
I used to do error checking everywhere in my scripts with ifs/else's and checking $? a lot until my mentor taught me a much better way.
set -e is a really good idea for most scripts and I'd actually prefer it were the default that you'd need to override. set -x is a good one for debugging because it prints every command as it runs, with its arguments expanded. I wish most scripting languages had this feature.
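A quick sketch of both flags in action. The fragile part runs under set -e inside a subshell so the demo script itself can report the early exit rather than dying with it:

```shell
#!/usr/bin/env bash
set -x            # trace each command to stderr while debugging

# set -e aborts on the first non-zero status; scoping it to a
# subshell lets us observe the abort from the parent script.
(
  set -e
  echo "step 1 ok"
  false           # non-zero status: set -e stops the subshell here
  echo "never reached"
)
echo "subshell exited with status $?"
```

The last line reports status 1, and "never reached" is never printed.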
Something to keep in mind with -e is that a lot of programs don't follow the "non-zero exit code means error" convention very well, or it may be perfectly fine for them to fail. You can handle these by running them like
(mycommand || true)
which runs the command in a subshell and always yields a "true" result, so set -e won't abort the script.
For example, a lot of my build scripts start by cloning a repo. If you run them locally, instead of on the build system, it's common for you to already have the repo checked out. Instead of checking for that first, or deleting the repo and re-downloading, I just let "git clone" silently fail.
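The same pattern with a generic command — mkdir fails on an already-existing path, standing in here for git clone on an already-existing checkout (the paths are just for illustration):

```shell
#!/usr/bin/env bash
set -e

dir=$(mktemp -d)

# First mkdir succeeds; the second fails with a non-zero status
# because the directory already exists. '|| true' absorbs the
# failure so 'set -e' does not kill the script.
mkdir "$dir/work"
mkdir "$dir/work" 2>/dev/null || true   # would abort here without '|| true'

echo "still running"
rm -rf "$dir"
```

With git, the analogous line would be `git clone myrepo || true`.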
And btw subshells are really your friend in bash, because anything you put in them acts like it was run in a separate script - they won't pollute the global namespace. Like...
Code:
(
  cd /foo
  git clone myrepo
  make
  make install
)
The parens make it run in a sub-shell so the change of directory is local to the part in parens and won't affect the top level script. This is especially good for stuff that might fail, because it won't leave you stranded in the wrong directory.
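A small sketch showing that both the directory change and a variable assignment stay local to the subshell (variable names are just for illustration):

```shell
#!/usr/bin/env bash
set -e

before=$PWD
flavor="vanilla"

(
  cd /tmp               # directory change is local to the subshell
  flavor="chocolate"    # so is the variable assignment
  echo "inside:  $PWD, flavor=$flavor"
)

echo "outside: $PWD, flavor=$flavor"
[[ $PWD == "$before" && $flavor == "vanilla" ]] && echo "parent untouched"
```

The parent script ends up exactly where it started, with its variables unchanged.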