Berry Web Blog -- page 0

How to Designate Single Window for Popup Buffers in Emacs

Posted: 2017-08-20. Modified: 2017-08-20. Tags: howtos, emacs, emacstips.

This blogpost is inspired by the approach found here.

One of the things that used to annoy me about programming in Emacs with SLIME mode (Common Lisp) is that SLIME would frequently open a popup buffer in one of the windows I was trying to use for some other task. For instance, various actions in SLIME will open a completion buffer, debugger pane, or inspection buffer. I eventually realized that what I really wanted was to designate a single window where all Emacs popups would open by default, so that my train of thought in the other windows remains undisturbed. Below is some Emacs Lisp code that enables this functionality:

;; The functions below use cl-lib (for cl-dolist, cl-return, and cl-loop).
(require 'cl-lib)

(defun berry-choose-window-for-popups ()
  "Run with your cursor in the window which you want to use to open
all popups from now on."
  (interactive)
  (set-window-parameter (selected-window) 'berrydesignated t)
  (berry-setup-popup-display-handler))

(defun berry-setup-popup-display-handler ()
  "Add an entry to `display-buffer-alist' which designates a window
for Emacs popups.  If a buffer is already being displayed in some
window, that window continues to be used.  Otherwise, the designated
window (which should already have been set) is chosen."
  (interactive)
  (add-to-list 'display-buffer-alist
               `(".*" . ((display-buffer-reuse-window
                          berry-select-window-for-popup
                          display-buffer-in-side-window)
                         . ((reusable-frames . visible)
                            (side            . bottom)
                            (window-height   . 0.50))))))

(defun berry-select-window-for-popup (buffer &optional alist)
  "Search for a window with the 'berrydesignated parameter set.
Display BUFFER in the first such window found and return that window.
If none is found, return nil."
  (cl-dolist (candidate (window-list-1 nil nil t))
    (when (eql t (window-parameter candidate 'berrydesignated))
      (set-window-buffer candidate buffer)
      (cl-return candidate))))

(defun berry-clear-popup-setting ()
  "Clear the 'berrydesignated flag on all windows, removing the
designation of any given window to host popups.  Also remove the
popup handler registration (this assumes our handler is still the
most recently added entry in `display-buffer-alist')."
  (interactive)
  (cl-loop for window in (window-list-1 nil nil t) do
           (set-window-parameter window 'berrydesignated nil))
  (pop display-buffer-alist))

My usual window layout when programming in Emacs looks like the following (note that what Emacs calls a "window" is a pane within a frame; an Emacs "frame" is what most other environments would call a window):

+-----------------+
|        | second |
|        | code   |
|primary | window |
|code    |--------|
|window  | REPL & |
|        | POPUPS |
|        |        |
+-----------------+

So what I do after opening all the windows I want is put my cursor in the "REPL & POPUPS" window and run M-x berry-choose-window-for-popups. The contents of my other windows then remain undisturbed by IDE functions unless I explicitly change buffers in one of those windows.
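
If you use this setup regularly, you can bind the entry points to keys in your .emacs. A minimal sketch (the key choices below are just my suggestion, not part of the code above):

(global-set-key (kbd "C-c w p") 'berry-choose-window-for-popups)  ;; designate current window
(global-set-key (kbd "C-c w c") 'berry-clear-popup-setting)       ;; undo the designation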


Effective Windows Setup

Posted: 2016-03-15. Modified: 2016-03-15. Tags: howto, Windows, practical.

1 Introduction

This article discusses some of my tips for effective usage of Microsoft Windows by a privacy- and productivity-minded individual.

I break the article into two sections: one aimed at "Standard Users" and one aimed at "Power Users". I define standard users as people who are primarily interested in using a browser, using an email client, working on various types of office documents, using the filesystem, listening to music, and watching movies. I define "Power Users" as people who, in addition to the above use cases, are interested in automating their interactions with the computer, engaging in software development on various software stacks, using remote servers, and using source control.

2 Standard Users

2.1 Privacy and Control

Newer versions of Windows take away an astonishing amount of privacy and control from the user. This is one major factor that is increasingly pushing me towards alternatives such as Linux and BSD. There are some tools that can help on Windows, however.

My favorite on Windows 10 is a tool called "ShutUp10", which lets you configure Windows 10's data-leaking behaviours in one spot.

Another favorite of mine for newer Windows versions is "Classic Shell". The start screen of Windows 8 is not something I really like, and the start menu of Windows 10 still leaves a lot to be desired.

For Classic Shell I set the following options:

  1. set start menu theme to "classic"
  2. set start menu button to "classic"
    1. adjust size of start menu button to make it bigger.

For Windows 10, I additionally uninstall Cortana and OneDrive – I have little interest in sharing my local data with Microsoft. Additionally, I noticed that Cortana takes significant system resources.

Follow these directions to uninstall Cortana (warning: this will break the default Windows 10 start menu).

Follow these directions to uninstall OneDrive.

verdict: use ShutUp10 and Classic Shell to restore a Windows 7-like experience.

2.2 Internet Tools

2.2.1 Browser

The main tool most people use to interact with the internet is their browser.

Chrome/Chromium is probably the browser best supported by websites today. Practically minded standard users should probably just use Chrome and call it a day.

I personally use the browser "Pale Moon" as my primary browser. "Pale Moon" has several advantages over Chrome: various things work better when running sites over the file:// protocol (extensions start with stronger permissions), a number of great Firefox extensions work with Pale Moon, and the project has independent governance, rather than being controlled by one of the mega-corporations. Pale Moon is not perfectly supported by every website, however, and some websites do User-Agent checking to warn against using Pale Moon (even when Pale Moon works perfectly fine there!).

Another browser that I find very promising is Brave. This browser project supports interesting ideas about the future of web-advertising.

Whichever browser you use, I recommend installing an ad blocker to speed up your browsing experience and remove gaudy ads. I recommend disabling the ad-blocker on sites which you would like to support.

My favorite ad-blocker for Chrome is "Adblock" (not Plus or Pro). It has a simple UI and gives effective results. Pale Moon has an ad-blocker called "Adblock Latitude" which works quite well in my experience.

verdict: if you want to support an independent browser, use Pale Moon. Otherwise, use Chrome. Install an adblocker on either one.

2.2.2 Email/Organizational Client

It seems most people these days use webmail clients such as Yahoo Mail or Gmail. These work fine.

In the past I have used desktop email software including Microsoft Outlook, EM Client, and Thunderbird. Microsoft Outlook always seemed like a bit of a buggy mess for my use cases (I used Outlook 2010) and did not support integrating multiple inboxes very well. I was very happy with EM Client for a while, but eventually my installation developed some sort of bug where it would constantly de-authorize itself and ask me to input my license key. This got very old. Additionally, EM Client began to take up too many system resources and became very slow to respond, so I stopped using it. Thunderbird is a fine email client, but I don't remember being especially impressed with its calendar feature (the Lightning add-on). Thunderbird has also largely been abandoned by Mozilla.

I currently use my self-written tool berryPIM to manage my todos, finances, contacts, and calendar, and use webmail for my email. I plan to incorporate an email client into berryPIM in the future.

verdict: if you don't care about privacy or offline access, use webmail, web-calendar, web-finances, and web-todos. If you do, consider using something like berryPIM for managing at least your calendar, contacts and the like.

2.3 Document Creation

I currently recommend LibreOffice 5.1 as the best free and open-source document suite. It is a good set of tools, and while it has weaknesses in areas such as typesetting formulas or easy built-in documentation for scripting, I still think it's quite good and useful.

Apache OpenOffice is a strong alternative to LibreOffice, and is perhaps a bit more stable and traditional than LibreOffice. Microsoft Office is a good tool as well, but is not free (in price or in the hackable sense).

verdict: LibreOffice 5.1 is good for many document editing needs.

2.4 Media

I recommend Winamp as a light and fast Windows music player. I have had terrible experiences with iTunes for Windows – it installs background tasks which consistently consumed significant resources on my machine and slowed it down.

VLC is a great tool to play videos and various types of media.

verdict: use Winamp for music, VLC for videos.

2.5 Utility

Standard users need to consider a few utility tasks for their computer.

2.5.1 Backup

Backup is important if you don't like losing files. Various versions of Windows provide different backup utilities – one based on simple backup-and-restore functionality, another (File History) based on versioned snapshots similar to Apple's Time Machine.

I personally use neither of the above tools; instead I use rsync in conjunction with Cygwin (Cygwin is discussed in the "Power User" section below). I use the Windows Task Scheduler to schedule a nightly backup of folders which are important to me. I recommend backing up to a remote location if you have access to a server which you can reach remotely. Otherwise, a locally attached hard drive should be OK for most purposes.

If you have cygwin64, here is an rsync command which can be entered into "Task Scheduler" to perform a complete backup of the Desktop folder:

Put the following into the "Program/script" box:

C:\Cygwin64\bin\bash.exe

Put the following into the "Add arguments" box:

-l -c "rsync  -rlt -z --chmod=a=rw,Da+x --delete /cygdrive/c/Users/vancan1ty/Desktop/ /path/to/destination/location/ >> /cygdrive/c/Users/vancan1ty/logs/backup_log.txt 2>&1"

(you must change the source, destination, and log file paths to match your use case).

Here is a screenshot of what it looks like on my computer.

verdict: use rsync+Task Scheduler to perform simple incremental remote backups.

2.5.2 Antivirus

Antivirus is less important for Windows users today than it was in the past. Newer versions of Windows come automatically configured with Windows Defender, which is a reasonable antivirus solution, and many email services do a better job at filtering virus-containing spam. Still, it is possible to get viruses today, especially if you are downloading files from less-than-reputable sources. In addition to outright viruses, you can accidentally download various "undesirable" software as parts of installation packages for other software and the like.

It is OK to disable Windows Defender's realtime protection if you feel that you can stay out of trouble for the most part and that it is using too many system resources.

Generally, though, I recommend leaving on Windows Defender's realtime protection feature, and additionally installing the free version of Malwarebytes. I recommend scanning your system with Malwarebytes once in a while to see what it finds. Malwarebytes has good reviews and has served me well. This antivirus routine should be fine as long as you are reasonably savvy about phishing scams and the like, and are not a high-profile target.

verdict: Use Windows Defender (realtime) + Free Malwarebytes (periodically)

3 Power Users

Below are some of my recommended configurations for developer and power-user tools on Windows. These recommendations build on my recommendations in the previous section for standard users.

3.1 General Power Tools

3.1.1 Text Editor

There are numerous excellent text editors for Windows, among them Notepad++, GVim, Atom, and Sublime Text.

  1. Emacs

    I am a big fan of Emacs with Vim keybindings. I recommend you check out my .emacs file here to get a feel for some of the optimizations you can do for Emacs on Windows and in general. See this link for some reasons I like Emacs over its competitor Vim.

    If you choose to use Emacs, one thing you might want to do on Windows is change the default Emacs shell to a Unix-style shell (MSYS or Cygwin bash), so that you can use the large proportion of built-in Emacs commands that rely on Unix tools, such as "man", "grep", and "make".

    Here are some lines which set MSYS bash as the default shell, with Cygwin as a backup:

    (setq shell-file-name "C:/MinGW/msys/1.0/bin/bash")
    (setq explicit-shell-file-name shell-file-name)

    (setenv "PATH"
            (concat ":/usr/local/bin:/mingw/bin:/bin;"
                    (getenv "PATH")))

    (defun cygwin-shell ()
      "Run cygwin bash in shell mode."
      (interactive)
      (let ((shell-file-name "C:/cygwin64/bin/bash")
            (explicit-shell-file-name "C:/cygwin64/bin/bash"))
        (call-interactively 'shell)))
    

    One important tip which gives Emacs a bit more of an IDE "feel" is to use the Emacs "speedbar" feature. The speedbar is a separate frame which opens up to the left of your main frame and allows you to navigate through files in a way similar to the file browser in many IDEs and developer tools. You can lock the speedbar to a specific directory or let it follow you as you open up documents.
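
    A minimal sketch for enabling it from your .emacs (speedbar itself ships with Emacs; the F8 binding is just my suggestion):

    ;; Toggle the built-in speedbar file navigator with F8.
    (global-set-key (kbd "<f8>") 'speedbar)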

    verdict: use Emacs with a smart .emacs file; fall back to Notepad++.

3.1.2 Shell

If you are used to using a Unix/Linux shell, you will probably enjoy getting a similar experience on Windows.

Cygwin is the best Unix shell and POSIX compatibility layer for Windows. There are a variety of alternatives, including msys, msysgit, and msys2, each of which is to some degree based on Cygwin. Msys is older and, among other things, only bundles 32-bit utilities (restricting file size in rsync transfers, for example). Msysgit is supposed to be an even older version of msys, though apparently recent Git releases have switched to msys2. Msys2 is close to Cygwin and is the best of the msys iterations, but it still doesn't have as many packages as Cygwin, and in my opinion it may try to be a little too smart in transparently converting between Windows and Unix paths, line-endings, and the like. For a list of differences between Cygwin and Msys2, see the following link.

If you use Cygwin, you may want to set your Windows environment variable HOME to "C:\Users\[username]". This way, Cygwin will use your Windows home directory as your Cygwin home.

Cygwin is great, but it has several pitfalls which I have run into.

  1. The default mintty terminal, while nice, does not work with "interactive" Windows shell commands (e.g. you can't run a Windows version of Python from within mintty Cygwin).

    I find this to be particularly annoying. Fortunately, you can run Cygwin using the default Windows console as well – I recommend adding a shortcut in C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Cygwin to C:\cygwin64\Cygwin.bat so you can easily launch Cygwin using the default Windows console.

  2. Binaries built in cygwin gcc will depend on cygwin1.dll.

    If you would like to create standalone, native Windows binaries, follow the directions here: http://www.mingw.org/wiki/FAQ under the header "How do I use MinGW with Cygwin?". Basically, whenever you want to use the native gcc and related tools from within Cygwin, just prepend the MinGW tools to your Cygwin path. Then you can build native binaries to your heart's content. Check the first level of dependencies using the "DUMPBIN" utility to confirm that cygwin1.dll is not a dependency.

If you decide to use Cygwin instead of msys2 as your primary shell, you need to be aware of the distinction between native Windows binaries and Cygwin binaries. The differences are especially obvious in the areas of filesystem paths, interpretation of newlines in output, and handling of symlinks. Cygwin provides some utilities in the "cygutils" package to help deal with these problems – I think the most useful are cygpath (path conversion) and "dos2unix"/"unix2dos" (line-ending conversion).
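
Since this blog leans heavily on Emacs anyway, here is a small Emacs Lisp sketch of calling cygpath to translate a Windows path into its Cygwin form (the function name is my own invention, the example path is illustrative, and cygpath is assumed to be on your PATH):

(require 'subr-x)  ;; for string-trim

(defun berry-to-cygwin-path (windows-path)
  "Convert WINDOWS-PATH to its Cygwin equivalent by shelling out to cygpath."
  (string-trim (shell-command-to-string
                (concat "cygpath -u " (shell-quote-argument windows-path)))))

;; Example (output shown for illustration):
;; (berry-to-cygwin-path "C:\\Users\\vancan1ty\\Desktop")
;; => "/cygdrive/c/Users/vancan1ty/Desktop"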

verdict: use Cygwin; set up your Cygwin path to call MinGW native tools when necessary.

3.1.3 Git

If you are a developer, you are probably familiar with the widely popular version control system Git.

Git has a good native Windows installer from https://git-scm.com/ . Git can also be installed within the cygwin environment using Cygwin's setup.exe.

I do not recommend using the default Git Windows Explorer integration features "Git GUI" and "Git Bash". I recommend using Cygwin instead of "Git Bash", and "TortoiseGit" instead of "Git GUI". TortoiseGit is a very nice GUI tool for using Git which integrates with Windows Explorer. It uses graphical icon overlays to visually convey the modification/commit status of files in your Git repositories, and also allows you to perform all common Git actions from within a fairly easy-to-use GUI.

  1. TortoiseGit tips

    If you are following my advice from earlier in this guide and are using Cygwin as your primary shell, then I recommend that, during the TortoiseGit installation process, you select "OpenSSH" as your SSH authentication provider. That way, you can reuse your OpenSSH public and private keys in ~/.ssh when making SSH connections to remote servers for Git actions.

    verdict: use Git with TortoiseGit for GUI integration.

3.2 Privacy and Control

Most of what I said for standard users applies for power users as well. One additional hack I like is removing the Windows 10 start screen (use the following directions).

Figure out what your ideological stance is on sending your local files to Microsoft, and adjust the settings in ShutUp10 accordingly. On Windows 7 you don't need to worry too much for the most part; Windows 8 falls somewhere in between Windows 7 and Windows 10 in this regard.

verdict: be aware of what data leaks out of your computer.

3.3 Internet Tools

In addition to my browser and email/organizational recommendations above, I know of a few more internet-related tools that may be useful to power-users.

3.3.1 File transfer client

FileZilla is a multiplatform GUI file transfer client. I recommend using it for FTP and SFTP transmission of files.

3.3.2 Port probing

I think Zenmap, the Nmap GUI, is a great port-probing tool.

3.3.3 Command-line tools

There are a variety of command-line tools, built into Windows or coming from the Linux/Unix world, which are very useful for internet-related tasks. You can get the latter through Cygwin, for example.

Some of my favorites are:

  • rsync – incremental file syncing
  • ping – see if a server responds to ICMP requests
  • curl – download files and data.
  • scp – copy files to and from remote server
  • ssh – securely connect to remote servers

verdict: learn the Unix-style command line!

3.4 Document Creation

3.4.1 Standard Office Docs

What I said for the Standard User scenario still applies here.

Something power-users may want to look into with LibreOffice is saving their documents in the ".fodt"/".fods" formats. The "f"-prefixed formats are saved as a single plain XML file, rather than as a zip archive of XML files (the usual method). This makes it much easier to put LibreOffice documents in version control, as well as to occasionally work directly with the text of documents.

One thing I think is characteristic about power users is our desire to automate repetitive or difficult tasks. A saying I like, for example, is "If you can't script it, I ain't interested".

In that spirit, I think most power-users will want to dip their toes into scripting their office suite after a while. Microsoft Office has a fairly simple-to-use API, accessible from VBA, for doing many common document-related tasks. It also has an "Object Browser" built in which facilitates discovering and using the document API, as well as corresponding documentation online.

LibreOffice/OpenOffice are not at first glance quite as accessible to would-be scripters. Both embed a language similar to VBA called StarBasic, and the StarBasic syntax and commands are well documented through the built-in help feature. However, the actual API for interacting with documents is provided through a cross-language abstraction called UNO, which can be somewhat confusing and is not documented in a way that is accessible to newcomers or non-programmers.

UNO involves some confusing terms such as "Included Services" and "Exported Interfaces", and if you don't understand them it can be difficult to find what you can actually do to an object or to discover the functionality you are looking for. Once you get the hang of it, it's not too bad, however. I recommend the following resources and tools to get you up to speed with LibreOffice scripting:

  1. The excellent book "OpenOffice.org Macros Explained" by Andrew Pitonyak, currently available for free at his site http://www.pitonyak.org/oo.php.
  2. The LibreOffice IDL API (for LibreOffice) or the OpenOffice IDL API (for OpenOffice).
  3. The tool X-Ray by Bernard Marcelly provides a way to explore available methods and properties within LibreOffice/OpenOffice. This functionality is similar to that provided by the Object Browser in Microsoft Office.

Once you develop a familiarity with BASIC and the common UNO interfaces, you should be able to script your documents to your heart's content!

verdict: use .fodt and learn LibreOffice scripting.

3.4.2 Closer Control

You can achieve tighter control and sometimes better output by using TeX or LaTeX to typeset your documents. I am proficient in TeX, and find it useful for documents where I want tight control of the layout or for documents which have lots of formulas.

I'll put more in this space sometime, but for now I will recommend:

  1. The TeXbook by Donald Knuth to gain an understanding of how TeX works (this book is fairly verbose, but it does get the job done).
  2. The TeX Reference by David Bausum for a hyperlinked reference to standard TeX control sequences.
  3. TexRefCard by J.H. Silverman for quick reference to common control sequences and functionality.
  4. OPmac by Petr Olsak to get some of the benefits of LaTeX, and the easy ability to switch between some common fonts, without having to go all in on the vast and confusing LaTeX.

verdict: learn TeX if you are really pedantic about tiny details in your documents.

3.5 Media

You might enjoy the cross-browser script "downloadyoutube", which allows you to download YouTube videos as mp4 files. Another browser extension which I have seen recommended for downloading videos is downloadhelper.

If you are interested in editing images and icons, you should check out the FOSS tools GIMP and Inkscape. I do not know much about editing videos or animation, so I won't make recommendations in those areas.

verdict: get downloadyoutube; learn image editing with GIMP.

3.6 Utility

I detailed a simple rsync-based backup system in the Standard User section; that is adequate for my needs for now. If you need something more complex, you can probably rig it up with rsync – see, for example, "Time Machine for Every Unix out there" for how to store versioned history of your files. The other recommendations from the standard section stay in effect.

verdict: use antivirus; learn rsync.

3.7 Programming Language Specific

3.7.1 Java

Java is a great programming language and runtime, contrary to what many haters like to state.

Maybe Java's greatest weakness is its comparatively high memory usage, along with some unpredictability in latency due to garbage-collection pauses.

One of the great things about Java is the wonderful tooling that exists for it. IntelliJ IDEA is my favorite Java IDE, and Maven is my favorite Java build tool.

verdict: use IntelliJ IDEA.

3.7.2 C/C++

MinGW provides a good C/C++ development environment on Windows, similar to what you can get on Linux/Unix. Specifically, it seems that today the MinGW-w64 fork of MinGW provides the best support for programming with GCC against the native Windows APIs. See my note above in the Shell section on how to run native Windows builds from Cygwin.

Learn how to use GDB and valgrind.

Emacs provides a good environment to develop C/C++ code.

verdict: learn how to call native MinGW tools from Cygwin.

3.7.3 Python

I like Python a lot, and find it a very productive environment for interactive computing and experiments. Python's practical power is in large part due to its excellent ecosystem of libraries and tools.

A very good way to install Python on Windows is through the "Anaconda" package from Continuum Analytics.

Below are some useful Python packages for data analysis and math…

  • SciPy + ecosystem – SciPy and its related tools (Matplotlib, NumPy, Pandas, IPython, scikit-learn, …) really do form an amazing toolset for data analysis and mathematical problems. This is definitely my preferred toolset for these problems currently – I have tried some alternatives, but I prefer the Python libraries and toolset.
  • SymPy is a decent toolkit for symbolic math. It is slow compared to Mathematica, and a bit confusing to use in my opinion.

I have yet to try out the myriad deep-learning libraries and such which are supported through Python. This is a hot field, and I plan to explore it if I ever get access to suitable GPU hardware or a fast CPU.

Emacs provides a good environment to develop Python code.

verdict: download Anaconda and start playing with awesome libraries.

4 Conclusion

I hope you enjoyed my opinions and tips on using your Windows PC effectively. Please email me at currellberry at gmail if you find any errors or have any comments. Thanks!


Criteria for Software Freedom

Posted: 2016-03-15. Modified: 2017-08-01. Tags: Opinion, philosophy, programming.

“Free software” means software that respects users' freedom and community. Roughly, it means that the users have the freedom to run, copy, distribute, study, change and improve the software. Thus, “free software” is a matter of liberty, not price.

– Free Software Foundation

1 Introduction

The FSF's definition of free software, written above, is a useful broad principle which relates to how much control a human user has over a given computer program. In this article, I discuss some specific criteria which we can use to assess "Software Freedom".

I think the typical definition which people think of when they hear "free software" is simply whether the software is open-source, or perhaps whether the software costs nothing to use. While those are reasonable readings of "free software", I, like the FSF, think it can be useful to load the term "Free Software" with more implications in order to better capture the nature of the relationship of a human user to a piece of software. What is important is not really the legal status of a piece of code, but rather the practical level of control which a user has over that code. In this era of ever-increasing computer technology, I think it becomes more and more important that humans control the computations which they use, and not the other way around.

I propose three criteria which I think are especially relevant today in our age of cloud services and increasingly complex software. My goal is to assess the level of control of the human user over a piece of software which she uses. The first criterion is simply the basic definition of open-source, the second criterion is mostly implied by Stallman's definition of "Free Software", and the third is not so directly implied. My criteria are:

  • Availability of source-code
  • Control over deployment
  • Accessibility to understanding

2 Availability of source-code

This criterion is probably what most people think of when they think of "free software" or "open-source software". Whether a piece of software is run remotely or locally, having access to the source code can give a user great insight into what the software is actually doing. Availability of source code often goes along with the user having greater "control over deployment". This criterion is the one addressed by the various open-source software licenses.

3 Control over Deployment

Control over deployment is a basic precondition to control over computation. If the computer user cannot control when an application starts, stops, and is modified, then the user cannot say that he controls the computation which is being done. Many of today's cloud web-apps run afoul of this criterion – not only are most of them closed-source and closed-data, but they can and will be discontinued whenever the provider decides, without recourse for the user. Cloud services are discontinued all the time – see this page or this page for some examples.

I define three levels of "control over deployment":

  Level 0. The user does not run the software, and does not have access to enough information about the infrastructure, source, and data of the project to "fork" the deployment of the software and run it herself.
  Level 1. The user does not actually run the software on her computer, but if the service is ever discontinued or negatively modified, she has enough information to "fork" the deployment of the software and run it herself.
  Level 2. The user controls execution of the software herself.

Level 0 is the default level of control over most web-apps. Google Translate is an example of a service I classify at this level. If Translate is ever discontinued, I have no ability to bring it back. I do not have access to the source code for Google Translate, and cannot know much about the infrastructure or methodology used to run it. Google Translate is a useful service, but it would be so much more useful to me, a hacker, if I could spin up my own version of "Translate" with my own changes.

Level 1 is the level of control afforded by most hosted open-source software installations. Sage Cloud is an example of a service in this category. While Sage Cloud is an online service for which I do not directly control deployment, Sage itself is open-source software, and I can easily spin up my own "Sage" server with most features intact. Level 1 has many benefits over Level 0 for interested computer-users, not least among them that the user can study the implementation of the service to potentially improve it and change it to match his own purposes.

Level 2 is the strongest level of control over computation. In Level 2, the user controls the computer on which the software runs, and can choose how it is used and when it is upgraded. Level 2 is the level corresponding to traditional desktop software. Even closed-source software provides a fair amount of control to users when it is run locally – the user has a perpetual ability to run the software, and the service cannot be removed without recourse. Additionally, the user can potentially reverse engineer and/or add extensions to even closed-source software as long as he has access to the binary.

Level 2 is obviously stronger than Level 1, but I think Level 1 is still an important step above the default zero-control level of typical cloud services.

4 Accessibility to Understanding

Accessibility to understanding is another criterion which has important implications for the practical level of control of humans over their computers.

Consider a large piece of computer software whose source is distributed in machine code, without comments, under the GPL. Technically, it is open source. If you run a disassembler on it you can get assembly code, and change it to your liking. While the software may be open-source, it will likely take you a very long time to figure out how the software works and modify it to match your own goals. Therefore, your practical level of control over the software is much smaller than it could be if you had access to the source code in a high-level language. Here we see that it is not merely the legal status of the source of a piece of software which determines your control over it, but also the technical status of that source.

One side-effect of the "Accessibility to Understanding" principle is that it sometimes indicates that, indeed, "worse" can be "better" for humans. If you are confronted with a 1-million line program which has marginally better performance at some problem than a 1000 line program, if you are like me you will probably opt to use and hack on the 1000 line program.

5 Conclusion

In this article, I discussed three criteria which I think are useful for assessing how much control a user has over a piece of software. The first criterion is plainly evident to most people, but I think the other two are less talked about and used.


When to use TeX vs Org-Mode vs OpenOffice

Posted: 2016-02-09. Modified: 2016-02-09. Tags: Opinion, computer_usage.

There are a number of tools out there which allow you to compose documents. Three of my favorites are TeX, Emacs org-mode, and OpenOffice. Each of these tools is open-source and allows the user to script and modify their experiences.

Below are some factors which I think are helpful to consider when choosing between these document-preparation tools.

Use TeX when:

  1. You want output with very high quality appearance.
  2. You want to take advantage of TeX's powerful layout algorithms and routines.
  3. You want to typeset a bunch of mathematics.
  4. You want the document to be version-controlled using Git or similar SCM system.

Use Emacs org-mode when:

  1. Content is king, and you don't at this stage want custom layout.
  2. You don't need access to the underlying layout engine.
  3. You want to enter content, including mathematics, in a distraction-free and straightforward way.
  4. You want easy export to LaTeX, PDF, and HTML.
  5. You want the document to be version-controlled within Git or similar SCM system.

Use OpenOffice/LibreOffice when:

  1. Ease of composition is more important than highly polished end-product.
  2. You are content with fairly standard and simple layout conventions – and don't require pixel-perfect control or algorithmic layout optimization.
  3. Mathematical typesetting is not that important to the document.
  4. You need to edit the document in conjunction with other users who are not technical and do not know TeX.
  5. You want integration with OpenOffice Calc (spreadsheet).
  6. The document does not need to be in version control.


Fundamental Principles in Writing Computer Code

Posted: 2016-01-25. Modified: 2016-01-25. Tags: programming, Opinion, philosophy.

1 Introduction

There are a huge variety of programming languages out there, an infinite number of concepts, and numerous ways to think about any given problem. How can one logically discriminate among these choices to make wise programming decisions?

First off, there exist a number of software-design camps, each with their own design methodologies and folklore-ish guidelines.

Some people claim that you should thoroughly document your code with comments, others claim that you should use comments sparsely lest they become out of date and misdirect future readers.

Most people think it's important that programmers "write understandable code." But how does one define what is understandable and what is not? Is it better for code to be concise and dense (à la APL) or verbose and wordy (à la COBOL)? Some people argue that verbose code is easier to understand, as it can sometimes be "self-documenting", while others claim that dense code is easier to read, as it is shorter.

Object Oriented designers love applying acronyms such as SOLID and GRASP to software. Functional designers love applying principles and techniques such as immutability, declarativity, currying, and higher-order functions.

Every programmer has a preference for a certain programming language – probably the one which she has spent the most time with. People get into passionate debates comparing languages and software ecosystems.

Almost all programmers seem to have internalized the concept of "code smell" – we say things like "that code doesn't seem quite right" and "in Java, it's best practice to implement that feature differently".

2 What are the Objective Criteria?

All of the above statements, standing on their own, are subjective in nature. Is there a provable basis for any opinion with regards to software design?

My answer is that there are a number of objective mathematical criteria which affect software design, but that actually applying these criteria to real world problems involves making subjective judgements, just as everything in the real world involves making subjective judgements.

Below are some core objective principles which can be used to assess the quality of software relative to given criteria. The runtime principles are commonly discussed; I do not think my source-code principle is commonly expressed, at least in the form I express it.

3 Runtime Principles

3.1 Correctness

This principle is, essentially tautologically, the most important principle with respect to software implementation.

Given a set of goals G, I define correctness for an algorithm A as whether or not A successfully produces the right output and invokes the correct side effects to satisfy G. There can be subjectivity in creating and assessing the goals. Undecidability is a potential choking point for many algorithms, and must be handled in the definition of G.
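
To make this a bit more concrete, here is one informal way (by no means the only one) to write the idea down in symbols; the notation is mine, not a standard formalization:

\[ \mathrm{Correct}(A, G) \iff \forall x \in \mathrm{Inputs}(G):\; A \text{ halts on } x \;\wedge\; \big(\mathrm{output}(A, x),\, \mathrm{effects}(A, x)\big) \text{ satisfies } G \]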

3.2 Runtime Guarantees

This principle differs from correctness in that, rather than assessing merely whether the correct output/side effects are produced within a standard execution of the code, we are also assessing guarantees and/or probabilities of runtime "performance" in the areas of reliability, space, and time usage.

Examples of features which I would categorize under this principle include uptime probability guarantees, fault-tolerance, bounded response-time guarantees, and bounded space-usage guarantees.

I do not include "runtime computational complexity" (asymptotic measures of how the performance of an algorithm changes as the size of its input changes) in my definition of "Runtime Guarantees".

3.3 Runtime Complexity

"Complexity" in various forms is a core topic of Computer-Science curricula, and is extensively discussed and researched. In computer science, we typically consider runtime time complexity and space complexity as primary criteria of the quality of an algorithm. One can also consider concepts such as runtime "energy complexity" as a loose concept defining bounds on likely energy usage.

4 Sourcecode Principles

Each of the above principles has been extensively discussed in academic and engineering literature. However, notice that NONE of the principles above say anything about how we should organize or write our code. Below I put forward a principle which I think is the core principle affecting code organization and writing.

4.1 Occam's Razor – Minimum Description Length

4.1.1 Introduction

Occam's Razor has time and time again shown its power in choosing models to explain phenomena. It can be shown to be naturally emergent in a wide range of fields including Bayesian probability, natural philosophy, and Theology.

Occam's Razor is simply the statement that "the simplest explanation is most likely to be the best."

4.1.2 Justification

The most convincing mathematical argument for Occam's Razor which I have seen comes from an analysis (link to Chapter 28 of "Information Theory, Inference and Learning Algorithms" by David MacKay) in Bayesian probability which concludes that "simpler" explanations in a given explanation-space have higher posterior probability after any given observation.

4.1.3 Bridge to Software

But how does Occam's Razor apply to software development? After all, its commonly stated purpose is in model selection.

I think the bridge from Occam's Razor for model selection to Occam's Razor for code organization is recognizing that choosing among code organization options is a form of model selection.

The phenomena we are trying to explain (match) are the runtime requirements of the software system. The models are the different code layouts and structures which we can choose among.

Software development is about solving problems with computers. Therefore, we can view the program as a model estimating the desired solution to the underlying problem. If two models (programs) perform roughly the same function, then we can choose between them by the one which more "sharply" models the problem, e.g. the one which is shorter by the principle of Occam's razor.

4.1.4 Application

If you agree that Occam's Razor is a reasonable principle to direct code composition, and even perhaps that it encompasses such commonly used principles as DRY (don't repeat yourself) and YAGNI (you ain't gonna need it), then the next question is: how can we apply Occam's Razor to make code organization choices?

Applying the principle, of course, is where we run into some trouble and must deal with subjectivity. In order to apply Occam's Razor to software, I propose that we look at Occam's Razor from a slightly different perspective which is often used in machine learning and statistics. Let's make more exact what we mean by "simpler" and instead discuss "minimum description length".

To compare two pieces of code with the same functionality, we can usually say that the code which has a smaller minimum description length more sharply models the desired feature set.

How shall we compare description length? Shall we compare lines of code? Bytes? Number of tokens? Function points?

I think each of the above approaches can be useful, but for most applications number of bytes and number of lines are relatively interchangeable when used together with common sense.

Another alternative, which may be more useful for comparing programs written in different languages or with different style conventions, is to compress the text of the program and compare the compressed sizes. This gets us closer to the "minimum description length" of the program and helps eliminate differences caused by whitespace, descriptive variable names, and the like.
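
As a rough illustration of this idea, here is a small Emacs Lisp sketch (the function name is my own, and it assumes a gzip executable is on your PATH):

(defun berry-compressed-size (file)
  "Return the size in bytes of FILE after piping it through gzip."
  (with-temp-buffer
    (set-buffer-multibyte nil)
    (insert-file-contents-literally file)
    ;; Replace the buffer contents with gzip's output, then measure it.
    (call-process-region (point-min) (point-max) "gzip" t t nil "-c")
    (buffer-size)))

;; Compare two programs with the same functionality (file names illustrative):
;; (berry-compressed-size "implementation-a.lisp")
;; (berry-compressed-size "implementation-b.lisp")
;; By this crude measure, the smaller result is the "sharper" model.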

5 Conclusion

In this paper, we discussed a number of principles for writing software which are objective in their core nature and are based in mathematics. We discussed the standard principles which can be used to assess a program's runtime characteristics. In addition to those, however, we also discussed how the "Minimum Description Length Principle" can be used to make choices about code organization and design. Not only does the "Minimum Description Length Principle" encompass other commonly used principles such as "DRY" and "YAGNI", but it also provides a general framework to assess the "sharpness" of a codebase's match to a given problem. For each of the principles we discussed, we touched on the various difficulties in applying it to real-world problems.

While the principles discussed in this paper do not change the subjective nature of software development, they do correspond to core features of a software program that can be measured objectively or pseudo-objectively, and that can be strongly supported by simple arguments in mathematics and philosophy.


Prognostication vs Software Development: Subjectivity and Complexity

Posted: 2016-01-25. Modified: 2016-01-25. Tags: Opinion, programming, philosophy.

"Crede, ut intelligas" – "Believe that you may understand"

– Saint Augustine, Sermon 43.7

As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.

– Albert Einstein, Geometry and Experience

1 Uncertainty in Life

Underlying every experience in life is uncertainty. It is in general impossible to obtain complete confidence about anything.

When a human walks through a room, she can instantly filter out irrelevant details. The reason that she can filter out irrelevant details is that we as humans subconsciously accept certain truths and foundational beliefs. Without making these operational assumptions, we would have to consider an untold number of possibilities at any given point in time and we would not be able to function in the real world.

Examples of details which we safely discard include:

  1. The ceiling is not going to fall on me, I do not need to analyze its structural integrity.
  2. The pattern of light on the wall is still, therefore it is unlikely that it is a threat to me but rather is just light shining through the window.
  3. The laws of gravity, force, and momentum will continue to stay in effect.

Subconsciously accepting these beliefs and many others about how our world functions allows us to focus on important threats and goals, such as

  1. My little sister just stuck out her leg to trip me.

While it is easy for us to filter out extraneous details and sensory inputs as we go about our daily lives, this is by NO MEANS easy for automated systems. Computers, unless they are programmed quite cleverly, are prone to get bogged down in the "combinatorial explosion" of possibilities which real-world inputs and problems tend to create. Computers' difficulty with real-world problems has prompted the creation of a saying known as "Moravec's Paradox":

"it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility" – Hans Moravec, 1988, Mind Children

Anything which is done following a rigid set of rules can often be automated comparatively easily, while tasks, however simple, which involve subjective perceptions can often strongly resist automation.

2 Uncertainty/Subjectivity in Software Engineering

Most human occupations make use of our ability to navigate ambiguity, and software "engineering" is no exception.

Modern software development is essentially climbing through the apparatus of an immensely complex logical system and making appropriate modifications to produce a desired result. At every step, ambiguity rears its head and intuition comes into play. We face ambiguity in our objectives, subjectivity at every implementation step toward meeting those objectives, and immense ambiguity in every interpretation of human communication.

I think that much of "software engineering" is quite subjective. In the design phase of a project, people with different backgrounds and skillsets can often approach the problem with vastly differing methodologies and architectures, but no one methodology can be proven to be "correct". Programmers can often achieve similar results using any of a variety of programming paradigms, languages, and planning approaches. Whether it is best to design in a functional style with immutable and declarative design principles, in an object-oriented style with SOLID design principles, or in a procedural style with close affinity to the hardware depends on the problem at hand and the prior experience of the programmers.

The goals for a project may include hard boundaries such as reliability and performance guarantees, and perhaps runtime-complexity bounds for the behavior of the program, but the organization of the code which gets us to the end result is in large part a matter of taste and discernment. It all comes down to achieving the desired external behavior in an acceptable amount of time. Regardless of the specific design choices the designer makes, what is critical is that the designer is able to intuitively grasp the problem and the interacting actors which affect implementing the solution, and can swiftly navigate the logic maze towards a workable result. In other words, a software designer must be able to intuitively navigate complexity as well as subjectivity in order to produce a successful software design.

3 Complexity

If a planner cannot effectively grasp the interacting pieces going into a planned software system, but instead is overwhelmed by the software's complexity, the planner's designs and estimates are likely to be quite poor.

The issue of managing complexity is a bridge from the skillset of software development to the skillset of many other fields, and appears to be a core human mental skill. Just as it is difficult to deliver quality software in a system for which you do not have an effective mental model, it is difficult for people to deliver any reasonable insight on real-world systems for which they do not have an effective mental model.

The classic case of a system which is too complex for people to gain reasonable insight on in general is… any system for which a person is trying to predict a future state.

It seems that once a system reaches a certain minimum complexity level, its specific behavior becomes completely unpredictable over time without actually observing the system to see what will happen. A New Kind of Science by Stephen Wolfram popularizes this idea as the "Principle of Computational Equivalence" –

"Almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication". – (A New Kind of Science, page 717).

Wolfram supports his Principle with analysis of hundreds of systems and processes from a diverse set of domains, and claims important implications for the Principle. The most obvious implication is that, once a system has reached a certain level of complexity, trying to predict its future using computational shortcuts, formulas, or intuition will very often fail.

I think Wolfram's Principle of Computational Equivalence closely matches our experience with trying to predict the future and extrapolate the past within human society. Prognosticators have long claimed to be able to foresee future events, but they have a poor track record indeed. Ask any economist what the state of the economy will be, any MBA which company's stock will be high, or any professor of world affairs what the major flashpoints for world conflict will be, in only a few short years, and while they may very well give you their opinion, and wax eloquent about technical details in their field and complex models which they may consult, they will also almost certainly be wrong. Just as most economists incorrectly predicted financial stability through the period of 2007/2008 ( http://knowledge.wharton.upenn.edu/article/why-economists-failed-to-predict-the-financial-crisis/), most mutual-fund managers will find that they cannot beat the odds of the market for long, but will instead regress to the mean in the long run (for proof, compare the long-term returns of the largest mutual funds relative to plain exchange-traded funds).

The difficulties which arise when attempting to predict the future are very similar to the previously mentioned difficulties with complexity which often arise in software development. Prognosticators attempt to predict the outcome of an immensely complex system (the world) with myriad free variables. Because the world is more complex than any person can make an effective mental model of (besides using various heuristics which may or may not appear correct in hindsight), the track record of people trying to predict what will happen in the future is extremely poor.

Similarly, in software development we work with very complex systems. In some respects, software development could be considered less complex than international politics, but in other respects, it could be considered more complex. Modern software development involves coordinating the execution of billions of networked logic units over time. While you may be able to largely understand the physics of the operation of a single transistor within a computer (complete with various quantum and traditional analog phenomena), you can certainly not completely understand or predict the operation of a machine with billions of interacting transistors (which is what today's computers are). Rather, you take certain assertions about your machine on faith, and assign them a very high degree of confidence. When something goes wrong with your computer, then, your limited knowledge about how your computer works allows you to assign an informative prior over the sample space of possible issues, and quickly zoom in on the likely cause of the problem. When a person is developing software, if he does not have a good enough mental model of the processes he is trying to control, he will almost certainly find that the project does not go as he expects.

4 Conclusion

In this paper, we discussed subjectivity and complexity in the context of software development. We discussed how humans' intuitive ability to operate in the presence of subjectivity gives us a key advantage over mechanical automata such as computers. We also discussed how real-world phenomena often display a great deal of apparent complexity as well as subjectivity. Wolfram's Principle of Computational Equivalence argues that there is often no shortcut to predicting the future state of a system: you must run a computation of equivalent complexity to the system itself in order to predict its future state. Just as it proves difficult to predict the future state of the world, because we have no effective means to simulate the operation of the world, it is difficult to predict the success or failure of a project such as a software project if the project leader does not have an effective grasp on all the important variables, considerations, and plans, so that he can mentally compute the possible outcomes.

As long as programmers make appropriate simplifying assumptions and form an effective mental model of the interacting pieces of the program, software development can proceed quickly and efficiently. The moment, however, that a programmer loses an effective grasp of how the system he is working on works, the programmer is facing the same scenario as the prognosticator – how can you predict how something will pan out when you do not have an appropriate understanding of the forces at play?


Why I prefer emacs+evil to vim

Posted: 2015-10-22. Modified: 2015-12-21. Tags: LISP, Opinion.

  1. Variable width font support.

    I studied my own reading speed on a variety of documents, and found that I read variable-width text about 15% faster than fixed-width text. I don't mind writing code in fixed-width fonts, but if I am going to use one text editor for everything, then I very much appreciate having variable-width support.

  2. org-mode > markdown

    Org-mode allows you to write structured content within Emacs, and supports the writer with a variety of useful features and tools. Besides the rich editing, export, and extension possibilities offered by emacs org-mode itself, I find that the org format is superior to markdown for my purposes. Two primary reasons for this are that org provides syntax (such as drawers) for defining all sorts of metadata about your text, and also that org is designed in such a way that it is basically equally usable as variable-width text and fixed-width text. In particular I dislike the extent to which markdown relies on fixed-width text for its display features.

  3. evil >= vim.

    Emacs "Evil" mode pretty much provides a superset of commonly used vim functionality. Evil supports all the commonly used vim editing commands, which allows you to take advantage of vim's ergonomic design as you edit text. Evil actually improves on some vim features – for example, search and replace shows replacements being entered as you type them. Evil also provides access to the full power of Emacs just one M-x away – you get the ergonomics of vim with the power of emacs when you want it.

  4. superior extensibility (> emacs-lisp vimscript)

    Especially for a lisp fan such as myself, Emacs Lisp seems a superior language to Vimscript. Emacs Lisp is kind of like a baby version of Common Lisp, and supports a rich set of features on its own and with the addition of third-party libraries.

    However, the real advantage of Emacs in extensibility is the fact that the majority of Emacs is actually written in Emacs Lisp. Emacs' GitHub mirror indicates that the ratio of elisp to C in the project is ~4:1. I believe most of the C code is quite low-level, related to multiplatform support, core rendering, and the like. On the other hand, Vim's GitHub repo indicates that Vim's vimscript-to-C ratio is ~0.8:1. Since the vast majority of Emacs is written in Emacs Lisp, in the Emacs environment you can very easily dive into functionality from within Emacs to understand and/or modify how things work.
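
    As a quick illustration, stock Emacs can jump straight to the Lisp source of most built-in commands (find-function ships with Emacs; the function chosen here is just an example):

    (find-function 'display-buffer)  ;; opens window.el at the definition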

  5. "self-documenting" w/ C-h f, C-h k

    The Emacs Manual describes Emacs as "the extensible, customizable, self-documenting real-time display editor". One feature I really like about emacs is the "self-documenting" part of that description. Emacs makes it very easy to look up the docstring of a given function or command, easy to determine what a keyboard shortcut does, easy to determine what shortcuts are available, easy to determine what functionality various modes provide, and more. In short, emacs makes it possible to spend a great deal of time within emacs without having to go online to look up how to use a given function or tool.
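
For reference, here is a minimal configuration sketch covering points 1 and 3. It assumes the evil package is already installed (e.g. via M-x package-install), and the font family below is just an example placeholder:

;; Point 1: use a proportional font in prose buffers.
(set-face-attribute 'variable-pitch nil :family "DejaVu Serif")
(add-hook 'text-mode-hook #'variable-pitch-mode)

;; Point 3: enable vim-style editing everywhere.
(require 'evil)
(evil-mode 1)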
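
And as a taste of point 4, here is a hedged sketch of inspecting and modifying built-in behavior from inside a running emacs; berry-note-save is a name made up for this illustration:

;; Jump straight to the elisp source of a built-in command
;; (C-h f save-buffer RET works too):
(find-function 'save-buffer)

;; Modify behavior without touching emacs' sources, via advice:
(defun berry-note-save (&rest _args)
  "Illustrative advice: announce each save."
  (message "Saving %s..." (buffer-name)))
(advice-add 'save-buffer :before #'berry-note-save)
;; (advice-remove 'save-buffer #'berry-note-save) ; to undo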


Common Lisp Standard Draft

Posted: 2015-10-11. Modified: 2017-04-26. Tags: LISP, programming.

UPDATE 2017-04-26: Updated this page to link to my new and improved version of the ANSI CL standard draft which now includes a pdf sidebar outline.

Below is a link to a build of the publicly available ANSI CL standard draft, which has been somewhat modified to include a pdf sidebar outline. The sidebar makes it much easier to navigate this 1200-plus-page document!

ANSI CL standard draft.

If you are curious to see what modifications I had to make to the tex sources to obtain the above pdf, please refer to the gitlab repository hosted here.

1 Backstory

The official Common Lisp standard ANSI X3.226-1994 (now referred to as ANSI INCITS 226-1994) is available from ANSI for a cost of $60. However, besides being expensive, the document is known to be a low-quality scan of the original.

Many people apparently use the Common Lisp Hyperspec, but I personally find this document highly confusing and difficult to learn from in any meaningful way. It is from the early days of "hypertext" and employs so many links as to be basically unreadable in my opinion.

An alternative to the above two choices for documentation on the Common Lisp language is the final draft of the ANSI standard. According to Franz Inc, the final draft differs from the official standard only in "formatting and boilerplate," and the final modifications are said to have "no technical implications". The final draft is licensed for "free use, copying, distribution". Tex sources of the individual chapters of the Common Lisp standard are freely available from CMU. Postscript copies of the individual chapters are freely available from the late Erik Naggum's website.

2 Archived Downloads and Notes

Note: as of 2017-04-26, I highly recommend the version above. These old revisions are here for historical purposes.

I have prepared two PDFs of the final Common Lisp draft standard for download.

I believe they can be treated as an authoritative resource on Common Lisp for the general user, and as a good alternative to the Common Lisp Hyperspec. Below I explain some of the alternative documentation sources for Common Lisp, and how it came to be that I am hosting a link to this document.

Draft Version A was created by concatenating Erik Naggum's ps files into a single pdf. It has two pages side-by-side at a time, and rather poor fonts in my opinion.

Draft Version B was created by recompiling the original Tex sources. The font quality is much higher, and the resulting document scrolls properly on-screen. Steps to reproduce my work are listed here, so that you can verify that I have not changed anything in the standard.

To access the sources I used to build the draft version B or to see the script I used to do the pdf generation, please see my repository clstandard-build.


Declarative vs Programmatic UI

Posted: 2015-09-15. Modified: 2016-01-25. Tags: GUI, Opinion, programming.

There are two common ways of going about defining a graphical user interface.

  1. Declarative UI.
    1. You use a markup language like HTML or XML, possibly in combination with a layout language like CSS, to define the identity and basic placement of widgets and controls.
    2. You traverse your declared UI using a real programming language like JavaScript in order to add functionality and advanced UI features.
    3. This approach has been adopted by frameworks such as AngularJS and Qt+QML, and is the standard approach for Android UI development.
  2. Programmatic UI.
    1. You create and lay out the elements of the UI directly in a programming language. While you may still achieve separation of concerns by delegating UI creation and layout to a dedicated "View" object, all actions necessary to construct the UI are programmatically guided instead of declaratively specified.
    2. Older UIs generally used this approach; it is still commonly used to construct UIs in a vast variety of desktop frameworks, and can also be used to manipulate the web DOM.
    3. Examples include Swing, SWT, GTK, and Qt, and programmatic UI is also an option on Android. (A small sketch in this style follows after this list.)
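
To make the programmatic style concrete, here is a minimal Emacs Lisp sketch (the language used elsewhere on this blog) that builds a tiny interface entirely in code. The function and buffer names are invented for the example:

(require 'button)

(defun berry-ui-demo ()
  "Construct a trivial interface programmatically: some text plus a button."
  (interactive)
  (with-current-buffer (get-buffer-create "*ui-demo*")
    (erase-buffer)
    (insert "A programmatically constructed interface:\n\n")
    ;; The click handler is an arbitrary function -- this freedom is
    ;; the essence of the programmatic approach.
    (insert-button "Click me"
                   'action (lambda (_button)
                             (message "Arbitrary code runs here.")))
    (insert "\n"))
  (pop-to-buffer "*ui-demo*"))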

Is one approach better than the other? What are the upsides and downsides of each approach?

Declarative UIs are probably at a higher level of abstraction. At the cost of a potential learning curve and some flexibility, you specify what UI you want and leave it to the computer to figure out what actions to take to achieve that UI. This is in some ways similar to an oft-mentioned divide between functional and imperative programming languages – in a functional language, you more often define what you want done without going into the details of how to do it. For a trivial, and common, example, the list manipulation functions "map," "filter," and "reduce" describe what operation to perform without going into the details of accumulator variables, index variables, or loop bounds (see the sketch below).
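
As an illustrative sketch (in Emacs Lisp, to match the rest of this blog), here is the sum of the even squares of 1..10, expressed as what to compute rather than how:

(require 'seq)

;; No index variables, accumulators, or loop bounds in sight:
(seq-reduce #'+
            (seq-filter (lambda (n) (zerop (mod n 2)))
                        (mapcar (lambda (n) (* n n))
                                (number-sequence 1 10)))
            0)
;; => 220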

Abstractions are leaky, however. If a declarative UI framework is not closely designed around what you, the programmer, want to do, then you may encounter a significant "impedance mismatch" and/or learning curve in making it do what you need. For example, imagine performing an arbitrary action upon a click in Swing vs AngularJS. Swing is very programmatic – you can simply register a click handler and do whatever you want to any other item on your page. In AngularJS, however, while you technically can register a click handler to do this sort of thing, it is not considered "good practice"; the standard approach is to ensure that the view is bound to an underlying data model using "directives". In order to accomplish an arbitrary action using directives, you may have to think beyond simply accomplishing the action and also treat AngularJS's databinding semantics as part of your problem. There is a significant learning curve to using AngularJS two-way binding properly, and it is not appropriate for all web applications.

AngularJS claims on its website that it is optimized for CRUD (Create, Read, Update, Delete) websites, and even has a disclaimer that it may not work well for sites with heavily-custom DOM manipulation needs. For example, I suspect Google Docs would be very difficult to implement in idiomatic AngularJS.

In conclusion, there is no abstraction that meets all needs. While AngularJS may be great for CRUD apps (once you take significant time to understand it), it is not good for DOM-manipulation-heavy apps which do not lend themselves well to simple databinding. Data-binding declarative UI frameworks tend to have advantages for applications close to the intended purpose of the framework, but if you need full customization and/or performance, you may need to specify not only "what" the computer needs to do but also "how" it should do it, by using a traditional procedural user interface API.


The Case for LISP

Posted: 2015-09-15. Modified: 2016-01-25. Tags: LISP, Opinion, programming.
  1. There are multiple ways to think about problems. *
  2. Given a real-world or programming problem, if you find the right abstraction to model it, you can often make the problem vastly simpler and/or your software vastly higher quality.
  3. Therefore, solving a problem well should involve trying to find the best abstraction possible.
  4. Programming languages influence your thought processes to match what they offer.
    1. "if all you have is a hammer, everything starts to look like a nail".
    2. See: Sapir Whorf hypothesis and Paul Graham: Succinctness is Power.
  5. Lisp offers the flexibility to adapt to ANY paradigm and imposes the fewest constraints on your thinking; it is therefore often the best language with which to approach a problem. **

\* I hold the idea that there are multiple ways to think about problems to be fairly obviously true. But in case you don't agree yet, here is some justification. An example of a problem which can be profitably approached from multiple perspectives is "dynamic programming". I can approach a dynamic programming problem using a top-down recursive viewpoint, or look at it from the bottom up as filling out values in a table; that is, I can reason about DP either as constructing a DAG of subproblems or as filling out a table of precomputed values. There are MULTIPLE WAYS to think about the problem.

\** One useful corollary to the idea that different paradigms are good for different applications is the common wisdom about the optimal applications of functional and object-oriented programming. Functional languages are often said to be good for problems which resemble computation, with a defined input and output (like compilers). OOP languages are said to be good for problems where you model the natural world as objects. Lisp, of course, lets you choose between object-oriented and functional programming, and lets you implement entire new language structures within the language itself to extend it to support most any paradigm you can think of; a small macro sketch follows below.
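
To give a flavor of that last claim, below is a tiny Emacs Lisp sketch (the same idea carries over to Common Lisp) that adds a new construct to the language itself; berry-check is a made-up name. Because a macro receives its argument unevaluated, it can report the literal form being tested, which no plain function can do:

(defmacro berry-check (form)
  "Report whether FORM evaluates to non-nil, quoting the form itself."
  `(if ,form
       (message "PASS: %S" ',form)
     (message "FAIL: %S" ',form)))

(berry-check (= (+ 2 2) 4))  ; prints "PASS: (= (+ 2 2) 4)"
(berry-check (> 1 2))        ; prints "FAIL: (> 1 2)"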