Day one of Gig City Elixir has wrapped up and I wanted to capture my thoughts on each talk while they were still fresh in my mind.

Why Functional Programming Matters

By John Hughes

This was a very interesting and engaging presentation by a well-seasoned speaker. John entertained us all with a walk through eighty-plus years of functional programming, from representing booleans, conditionals and integers as functions all the way through modern efforts to describe chip layouts with functions.

Representing booleans as functions has always struck me as a neat trick, but not an approach that you would actually implement. I was surprised to learn, then, that early versions of the Glasgow Haskell Compiler represented many data structures as functions and only abandoned that approach when CPU branch prediction made it less performant. (John didn’t expand on that point, and I don’t know enough about branch prediction to understand exactly what the issue would be.)
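
As a minimal sketch of the trick in Elixir (my own example, not from the talk): a boolean is just a function that picks one of its two arguments, and “if” is simply function application. (In an eager language like Elixir both branches are evaluated before the boolean picks one, so in practice you would pass zero-arity functions.)

```elixir
# Church-style booleans as functions: "true" picks its first argument,
# "false" picks its second, and "if" just applies the boolean.
church_true = fn a, _b -> a end
church_false = fn _a, b -> b end
if_then_else = fn bool, then_val, else_val -> bool.(then_val, else_val) end

IO.puts(if_then_else.(church_true, "yes", "no"))   # => yes
IO.puts(if_then_else.(church_false, "yes", "no"))  # => no
```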

It wasn’t until the 1960s that functional programs could actually run on real computers, thanks to John McCarthy’s discovery of LISP. John Hughes then mentioned two other researchers and languages I hadn’t heard of, Peter Landin (with ISWIM) and John Backus (with FP). Landin was looking for a way to stem the proliferation of new languages and proposed ISWIM as the language to do that. The major development from Landin’s work, according to Hughes, was the postulation that certain laws can be established for programs. In particular, Landin asserted that two alternate implementations of a program can be thought of as equal to each other. John Backus had led the development of the FORTRAN compiler but used the occasion of his Turing Award lecture to propose a new language that lets you combine functions and reason about them. John Hughes encouraged everyone to read the paper (although he cautioned that the first half contains the bulk of the good ideas).

In 1982, Peter Henderson published his work on functional geometry, a language for building M.C. Escher-inspired pictures by combining functions. From a small set of functions for transposing and rotating images, he was able to construct very intricate pictures with the recursive properties so common to Escher. (This section of the talk brought to mind section 2.2.4 of SICP, though Abelson and Sussman were manipulating a portrait of MIT’s founder William Barton Rogers.)
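
To give a flavor of that style, here is a toy version of my own in Elixir (much simpler than Henderson’s actual combinators): a “picture” is just a list of points in the unit square, and the combining forms are plain functions over pictures.

```elixir
# Toy picture combinators: rotate a picture, or place two pictures
# side by side / one above the other by rescaling their coordinates.
rot = fn pic -> Enum.map(pic, fn {x, y} -> {y, 1.0 - x} end) end

beside = fn left, right ->
  Enum.map(left, fn {x, y} -> {x / 2, y} end) ++
    Enum.map(right, fn {x, y} -> {0.5 + x / 2, y} end)
end

above = fn top, bottom ->
  Enum.map(top, fn {x, y} -> {x, 0.5 + y / 2} end) ++
    Enum.map(bottom, fn {x, y} -> {x, y / 2} end)
end

# Because the combinators compose, intricate arrangements fall out of a
# single motif and a handful of transformations.
motif = [{0.1, 0.1}, {0.9, 0.2}, {0.5, 0.8}]
quartet = above.(beside.(motif, rot.(motif)), beside.(rot.(rot.(motif)), motif))
IO.inspect(quartet, label: "quartet")
```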

At this point, John Hughes presented what he sees as the four main tenets of functional programming:

  1. Operating on whole values
  2. Combining forms via composition
  3. Using simple laws to describe properties of the system
  4. Functional representations of the domain

A few more examples of these tenets were then presented:

  • In a DARPA project to build prototypes for something called a “geometric region server” in multiple languages, the Haskell solution used less than half the code of the C++ solution and was built in less time. The choice quote from the report, though, was that the use of higher order functions was “just a trick that would probably not be useful in other contexts.”
  • The use of lazy evaluation allows for the generation of infinite sequences, which was then used to demonstrate how to differentiate and integrate functions (another callback to SICP for me; a small Stream-based sketch of the idea follows this list).
  • Separating the production of data from the consumption of data allows for further composition. For example, the implementation of an alpha-beta search can be applied to both a module that generates tic-tac-toe games and a module that generates chess games.
  • Another example cited was QuickCheck. Laws are expressed as properties of your program, and the search strategy is separate from the generation of examples to verify.
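
Here is a rough Elixir approximation of the differentiation example (mine, not from the talk), using lazy Streams in place of Haskell’s lazy lists: generate an infinite sequence of estimates of f'(x) with ever-smaller step sizes and stop as soon as two consecutive estimates agree.

```elixir
defmodule Approx do
  # Estimate the derivative of f at x by halving the step size until two
  # consecutive approximations are within eps of each other.
  def derivative(f, x, h0 \\ 1.0, eps \\ 1.0e-6) do
    h0
    |> Stream.iterate(fn h -> h / 2 end)                 # infinite sequence of step sizes
    |> Stream.map(fn h -> (f.(x + h) - f.(x)) / h end)   # infinite sequence of estimates
    |> Stream.chunk_every(2, 1)                          # consecutive pairs of estimates
    |> Enum.find(fn [a, b] -> abs(a - b) < eps end)      # demand only as many as needed
    |> List.last()
  end
end

Approx.derivative(fn x -> x * x end, 3.0)  # ≈ 6.0
```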

The talk concluded with an overview of using functional programming to improve the design of computer chips. In the wake of the Pentium’s floating-point bug, Intel began describing their circuits with functions, including formal verification, throughout the entire process, down to the generation of masks. Chip layouts have improved and the defect rate has dropped as a result. A language out of MIT for representing FPGAs as functions has even enabled the inclusion of QuickCheck in the generated FPGA itself.

The only question asked afterwards was what the Elixir community could learn from the Haskell community. I thought John Hughes had a good answer: the Haskell community values elegant solutions, which you arrive at by understanding the problem, looking for the laws that govern the system and designing before you begin to code.

Applications With A Capital A

Paul Dawson

Paul’s talk addressed one of the biggest struggles with adopting Elixir: deployments. Paul posed a question: what if we could push applications to an already running BEAM instance, instead of spinning up and tearing down servers with each deploy? How could we accomplish that?

(Paul frequently cautioned that much of what he was presenting was a thought experiment or investigation, and that you should mostly just use Distillery. Since I’ve never actually deployed an Elixir app, I was struggling at times to relate to what Paul was talking about.)

A few interesting takeaways from this talk were:

  • The :code module lets you turn Elixir code into a BEAM file. You can then programmatically load that compiled file into the BEAM. (A small sketch of this follows the list.)
  • The :reltool module, which is used to build releases, also provides a GUI you can use to explicitly include the modules you want, e.g. to target a specific system.
  • It is possible to programmatically launch the BEAM on remote nodes (provided you have SSH access to them), then push code to all nodes.
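
As a minimal sketch of that first bullet (my own example with a made-up Hello module, not Paul’s code): compile a string of Elixir at runtime and hand the resulting bytecode to the VM.

```elixir
# Compile Elixir source at runtime; Code.compile_string/1 returns the
# module name and its BEAM bytecode.
source = """
defmodule Hello do
  def greet, do: "hello from dynamically loaded code"
end
"""

[{module, bytecode}] = Code.compile_string(source)

# The bytecode is exactly what would live in a .beam file, so it could be
# written to disk, shipped elsewhere, and loaded with :code.load_binary/3
# (the filename argument is only used in error reports).
{:module, ^module} = :code.load_binary(module, ~c"Elixir.Hello.beam", bytecode)

IO.puts(Hello.greet())
```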

Pour Beer With Your Face

Jeff McGehee

Jeff works at Very and was part of a team that built a very cool application that uses facial recognition to unlock a self-service beer tap that then charges you by the ounce. Nerves controls the hardware for connecting the tap (via a solenoid) and sending volume measurements to a screen at the tap in real time over Phoenix channels.

The system itself was very interesting, but we didn’t learn much more about it. Instead, Jeff talked about a tongue-in-cheek metric called SoDQoP which attempts to combine Speed of Delivery (how quickly your team is shipping features) with Quality of Product (the fraction of your userbase that is happy with said features). Jeff believes that maximizing the SoDQoP of your people, tools and process will keep you ahead of your competitors.

The bulk of the presentation was spent on tools, specifically Nerves. For me, the memorable opinion of this talk was that you should “look for things people aren’t doing that aren’t obviously bad.” This was a counterpoint to “nobody ever got fired for buying IBM”: if you keep doing what everybody else is doing, you’ll never differentiate yourself. Nerves is clearly on the bleeding edge for hardware development, and there were numerous examples of how configuration, testing, deployment, etc. were all easier with Nerves than with the leading alternative, Yocto.

OTP and the Web: A Love Story

Hannah Howard

Hannah’s was the first talk after lunch and was predominantly an intro to managing state with OTP, specifically GenServer and Supervisors. To set the stage, Hannah explained traditional web development, which uses a stateless protocol (HTTP) and stateless server code (we rebuild our view of the world with each request). State is typically stored in the database, which then becomes the bottleneck as the service scales.

Hannah proposed three approaches to reconciling the stateful approach of OTP with the stateless approach of more traditional web development:

  1. Pretend you’re not using OTP. Phoenix uses OTP behind the scenes (each request is handled in a separate process), so you’re still leveraging the BEAM even if your code looks very similar to what you would have written in Rails.
  2. Write a regular OTP app and treat Phoenix as a thin view layer (a rough sketch of this idea follows the list).
  3. A hybrid approach between the two. Unfortunately, this section was rushed due to time constraints and I missed Hannah’s final point. I think there’s a lot of interesting areas to explore here, though.
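
To make approach 2 concrete, here is a rough sketch of my own (not from Hannah’s talk): the state lives in a supervised GenServer, and a Phoenix controller action becomes little more than a call into it plus rendering.

```elixir
# A page-view tally held in process state instead of a database.
defmodule Tally do
  use GenServer

  # Client API: this is all the web layer needs to know about.
  def start_link(_opts), do: GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
  def bump(page), do: GenServer.call(__MODULE__, {:bump, page})
  def count(page), do: GenServer.call(__MODULE__, {:count, page})

  # Server callbacks: the map itself is the state, no database round trip.
  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_call({:bump, page}, _from, state) do
    state = Map.update(state, page, 1, &(&1 + 1))
    {:reply, Map.fetch!(state, page), state}
  end

  def handle_call({:count, page}, _from, state) do
    {:reply, Map.get(state, page, 0), state}
  end
end
```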

Building Resilient Systems With Stacking

Chris Keathley

This was a very dense and thorough talk about Chris’ opinions on the right way to boot your Elixir application, and what the boot process reveals about your reliance on dependencies.

Resilient systems are able to recover from or adjust to misfortune or change. All complex systems run in degraded mode nearly continuously: they are able to function in the face of many flaws, sometimes thanks to operator intervention. To build such systems, our applications should concern themselves with three things:

  1. Handle failures gracefully
  2. Provide feedback to other systems
  3. Give insight to operators

After establishing that foundation, Chris dove into the order and manner in which an application should come online. The overarching philosophy is to do things in stages and ensure the system is in a stable and correct state after each phase.

The first step is booting the app, which involves reading the system configuration, starting the BEAM and then starting your app. This is the normal way of starting a release, and all of this is provided by Distillery. Should anything go wrong in this first phase, the app will simply panic and abort.

The next phase is loading the runtime config, which has been a source of much debate in the community. Chris’ favored approach is to define a keyword list of config in application.ex and pass that to a supervised GenServer. That GenServer is then responsible for loading any additional configuration and storing everything in an ETS table. Any dependencies that require configuration retrieve it by querying the GenServer directly. Should the config server encounter an error while loading, it should fail and prevent the app from booting further (since it’s hard to do anything meaningful without configuration).
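
Here is a condensed sketch of that pattern as I understood it (module and key names are mine, not from Chris’ example repo): a supervised GenServer builds an ETS table during init, merges runtime values over the compile-time defaults, and crashes if anything required is missing.

```elixir
defmodule MyApp.Config do
  use GenServer

  # Started from the supervision tree with the keyword list defined in
  # application.ex, e.g. {MyApp.Config, port: 4000}.
  def start_link(defaults), do: GenServer.start_link(__MODULE__, defaults, name: __MODULE__)

  # Dependencies query the server directly for the values they need.
  def get(key), do: GenServer.call(__MODULE__, {:get, key})

  @impl true
  def init(defaults) do
    table = :ets.new(:myapp_config, [:named_table, :protected])

    # System.fetch_env!/1 raises when PORT is unset, which halts the boot;
    # there is little point continuing without configuration.
    runtime = [port: String.to_integer(System.fetch_env!("PORT"))]

    for {key, value} <- Keyword.merge(defaults, runtime) do
      :ets.insert(table, {key, value})
    end

    {:ok, table}
  end

  @impl true
  def handle_call({:get, key}, _from, table) do
    {:reply, :ets.lookup_element(table, key, 2), table}
  end
end
```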

The next step, starting dependencies, requires the most critical thinking and is the one I find hardest to speak about in generalities. In Chris’ example, the app starts by enabling a Phoenix endpoint (bound to a port specified by the config server) that returns a 500 to the load balancer, since the system is not yet ready to receive requests. It then boots up a database connection pool, but does not attempt to connect to the database. Because connections to the database may come and go, it should be possible to boot the service while the database is down; the database will eventually come back, and the service will connect to it then. A database connection check can be added to the healthcheck endpoint used by the load balancer, so that the service is taken out of the LB while the database is unreachable.
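
As a sketch of the healthcheck piece (module names here are hypothetical, not Chris’ code), a plug polled by the load balancer could answer 500 until whatever process tracks the database reports it as reachable:

```elixir
defmodule MyApp.HealthPlug do
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    # MyApp.DatabaseWatchdog is a stand-in for whatever tracks the state of
    # the database connection (see the watchdog sketch below).
    if MyApp.DatabaseWatchdog.connected?() do
      send_resp(conn, 200, "ok")
    else
      # While the database is down, tell the load balancer to route around us.
      send_resp(conn, 500, "database unreachable")
    end
  end
end
```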

The next layer is alarms and notifications. Instead of booting a database manager directly, the service boots a database supervisor that includes a watchdog process to monitor the database. The watchdog tracks the current state of the database connection (up/down) and sets or clears an alarm on the transition. (Technically, the watchdog waits for 3 consecutive “UP” messages before clearing an alarm to avoid flapping back and forth.) Erlang’s :alarm_handler module allows services to control how the setting and clearing of alarms is handled, e.g. Slack, PagerDuty, dev/null.
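
A rough sketch of the watchdog idea (simplified, with my own naming, not Chris’ actual code) might look like this: poll the database, require three consecutive successful checks before clearing the alarm, and set it again on the first failure.

```elixir
defmodule MyApp.DatabaseWatchdog do
  use GenServer

  @check_interval 5_000
  @ups_to_clear 3

  def start_link(_opts),
    do: GenServer.start_link(__MODULE__, %{status: :down, ups: 0}, name: __MODULE__)

  # Used by the healthcheck plug above.
  def connected?, do: GenServer.call(__MODULE__, :connected?)

  @impl true
  def init(state) do
    schedule_check()
    {:ok, state}
  end

  @impl true
  def handle_call(:connected?, _from, state), do: {:reply, state.status == :up, state}

  @impl true
  def handle_info(:check, state) do
    schedule_check()
    {:noreply, transition(state, ping_database())}
  end

  # Down -> up only after three consecutive good checks, to avoid flapping.
  defp transition(%{status: :down, ups: ups}, :ok) when ups + 1 >= @ups_to_clear do
    :alarm_handler.clear_alarm(:database_down)
    %{status: :up, ups: 0}
  end

  defp transition(%{status: :down, ups: ups}, :ok), do: %{status: :down, ups: ups + 1}

  # Up -> down on the first failed check.
  defp transition(%{status: :up}, :error) do
    :alarm_handler.set_alarm({:database_down, "database unreachable"})
    %{status: :down, ups: 0}
  end

  defp transition(state, _result), do: %{state | ups: 0}

  defp schedule_check, do: Process.send_after(self(), :check, @check_interval)

  # Stand-in for a real connectivity check (e.g. a cheap SELECT 1).
  defp ping_database, do: :ok
end
```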

The final step is connections to external services. For these, a circuit breaker pattern is normally used and Erlang provides one in the :fuse module (which has a neat API with terms like melt and blown). If applicable, good responses from the external service can be cached and used by the service when the external service is unreachable. (Provided, of course, it is acceptable to serve stale data.)
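
From what I remember of the :fuse API (worth double-checking against the library’s docs), wiring it in looks roughly like this, with a hypothetical do_request/1 standing in for the real HTTP call:

```elixir
defmodule MyApp.ExternalService do
  @fuse :external_service

  # Install the fuse once at boot: blow after 5 melts within 10 seconds,
  # then automatically reset 30 seconds later.
  def install_fuse do
    :fuse.install(@fuse, {{:standard, 5, 10_000}, {:reset, 30_000}})
  end

  def fetch(url) do
    case :fuse.ask(@fuse, :sync) do
      :ok ->
        case do_request(url) do
          {:ok, body} ->
            {:ok, body}

          {:error, reason} ->
            :fuse.melt(@fuse)  # record the failure against the fuse
            {:error, reason}
        end

      :blown ->
        # Circuit is open: this is where a cached response could be served,
        # provided stale data is acceptable.
        {:error, :circuit_open}
    end
  end

  # Stand-in for the real call to the external service.
  defp do_request(_url), do: {:error, :not_implemented}
end
```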

(An example repo demonstrating all of the above is available on GitHub.)

Chris’ overarching point was that whether a service is up or down is judged by its callers but determined by its dependencies. The design of a service needs to consider the ways its dependencies can fail and what the service should return to its callers when they do. Making those decisions in relation to the boot up process helps to determine how the system should behave overall.

You’re Doing Too Much

James Edward Gray II

James’ talk was a summary of his experience with this year’s ICFP programming competition, where the goal was to generate instructions for a nanobot to 3-D print some design. James’ team built a lot of tooling, including a simulator, but eventually lost to a team that took a much simpler approach.

It was a very interesting story and I appreciated James’ final point about the cost of dependencies. The team with the simpler approach was only concerned with manipulating lists of coordinates, while James’ team relied on, in his words, “the whole world.” The addition of every dependency increases the cognitive load of a system, because it imposes the domain of the dependency on the system. It follows that a system with more dependencies has a higher cognitive load than a system with fewer.