The value of quitting

As I sip my iced coffee on the terrace of my dear friends’ beach house in Central Greece, a quarter mile from the clear, calm waters of the Euboean Gulf, resting the weary legs that pedaled me back and forth across the surface of the sea not 10 minutes ago, not worried at all about conference and journal deadlines, it dawns on me what an outstanding career move it was to quit my PhD back in 2016. Not just a good idea in terms of physical and mental health, but also in terms of actual career objectives.

But how can this possibly be? I quit, and everybody knows that quitters are the worst people in the world! Screw those guys!

So let’s compare and contrast my situation along some parameters to see how much I messed up my life by daring to quit.

  • Academic qualifications. I had a Master’s, I have a Master’s. Nothing lost, nothing gained.
  • Computer Science qualifications. Improved during the PhD, sky-rocketed afterwards. During my grad coursework I learned some new programming languages and paradigms, linear and non-linear optimization, and mostly stuff about Machine Learning that became obsolete by the time I had learned it (who cares about averaged perceptrons and kernelized SVMs anymore?). I was selected to do a PhD in Computer Vision merely because my Probabilistic Logic Programming system had been applied to an antiquated video dataset (it could, in theory, be applied anywhere we have uncertain observations). Ergo, I was supposed to do a PhD in an area that I had absolutely nothing to do with, and no real desire to learn anything about. I was a logician who cared about stuff like Prolog, Markov Logic Networks and MaxWalkSat. What the hell did I have to do with log-concave densities and background subtraction? How long would it take me to learn the things I needed to start writing papers that might one day be eligible for publication in some second-tier Vision conference? On the other hand, during my time as a lecturer I’ve learned (and re-learned) elements of algorithms, coding principles, compilers, distributed systems, functional programming, even hardware! I’ve had top companies knock on my door to interview me for excellent positions in excellent places. When a student asks a question in lecture that I can’t answer, I don’t rest until I find an answer, and that is my own research, under my own terms. During my PhD, I was a nobody in line to become Dr. Nobody, in a counter-productive environment where I constantly felt lost. Because I was lost.
  • Financials. From one day to the next, I quadrupled my income. I started parking funds into a retirement account. I was able to move into my own place, which, you know, is something that you might need as an adult / eligible bachelor at 30 years of age? I have some money to send back home, since my parents won’t ever get pensions in the failed state of Greece. I bought a car. I adopted a kitty that I take care of with toys, good food, and frequent vet visits. These things improve my quality of life and allow me to be a happier, better-rested, generally more wholesome person.
  • Health.
    • Mental: No, it hasn’t been easy. Managing hundreds of students and dozens of TAs is not something everyone can do or is willing to do. Nobody trained me to do this. I was thrown into a Discrete Math class of 430 people, split across three lectures on Tuesdays & Thursdays, and left to find my own way, with vague expectations about the quality of education expected out of UMD, or about what was expected of me, an essentially foreign person who now had to teach people from all over the globe. It took its toll. But come December and May 20th, I turn a dial. No more worrying about papers and deadlines. No more useless meetings that lead nowhere. No more Skyping, fruitless collaborations, or reviewing draft upon draft upon draft; no more reading dozens of papers hoping that an idea will somehow pop into my head. No sir / madam; if I run out of ideas / inspiration for assignments, there are books that I can consult. I have direction and help. Actual, concrete direction and help.
    • Physical. Fact: in the US, if you’re outside NYC, you need a car. If you disagree, you’re wrong. No longer waiting at bus stops when it’s 90 degrees and ridiculously humid (that’s Maryland for you), or 10 degrees and… well, 10 degrees. No longer walking to the campus gym and back with my oversized rucksack under these conditions, making going to my workout… a workout. Into the car I go, toss the bag in the most geometrically obscene way possible, blast the A/C, hit my off-campus gym like a maniac, drive back in style. I have access to better food, with organic vegetables and fruits that actually taste like something. Because in this system and this country, a McDonald’s cheeseburger costs $1.50 and a Sweetgreen salad costs $10 (I wonder why there’s an obesity epidemic in low-income neighborhoods). As a grad student, I couldn’t even afford to look at those foods. Now, on shopping days, I visit up to 3 different food stores to get what I need!
  • Professional status. People are impressed by titles. I’m not, I don’t care at all (seriously, wtf is a “Solutions Architect?”), but people generally are. Whenever I’m asked what my job is, people assume I’m a genius who has published in every conference & journal there is. After a while, I stopped trying to convince them otherwise, and just smile and nod. Recruiter messages arrive in my inbox every day. When I was a graduate student, nobody cared. Of course, being a nobody has the benefit of not having to mark a ton of e-mail as “read”, and of avoiding awkward social encounters (yeah… uh… you know… a lecturer is not a prof… I don’t do any research… not a bigshot of any kind… uh… nevermind), but in the priority queue of problems, those are close to last.
  • Social life. I have one!
  • Music. Music and, in particular, the bass guitar, is important to me. A penniless graduate student can barely afford a Squier guitar and a 50W practice amp. Also, the closest bass guitar lessons I could find were at a non-public-transportation distance from my place of work and (grad-student-era) residence. It’s not that you can’t do it. You can. But after the umpteenth time that you get shat on by a pigeon while waiting at the bus stop, or pull a nerve trying to carry a 13lb 5-string Ibanez BTB in and out of the bus doors, well, that stuff wears you down, no matter what you want to do (exercise, music, anything). You want to maintain your passions without wasting time on useless procedural things (this is the chief reason I left Greece).
  • Actual free time. I’m the master of my post-5pm life. This is huge to me. I no longer have to worry about not worrying about research. My goals are far more concrete, and I have a way of evaluating both myself and my team. How did the students do in the midterm? They tanked it, you say? Well, is this perhaps on me? Did I perhaps assign a large point value to a problem whose basics I didn’t cover well? Gotta curve appropriately. Did a TA or two fail to cover the same basics in their discussion sessions? Need to have a talk with those TAs. I have actual, measurable goals and concrete metrics / numbers that tell me how I’m doing. None of the haziness that I, honest to God, could not stand during my PhD. Is this how I’m supposed to approach this? Is this actual Zero-Shot Learning? What do I need to know to do this? No. Not for me, thank you very much. I need concrete, actionable feedback. Furthermore, it is up to me to do financial planning such that I don’t need to teach over the winter or the summer and can go on trips instead! Yes, I could absolutely go ahead and sign up for a mortgage, but why? I don’t want to burden the next 15 years of my life with extra work so that I pay for a property I’m only likely to use as a financial commodity! My biggest financial commodity is my healthy body and mind. And why even go through the trouble of contacting appraisers, loan officers, property managers… I couldn’t give a crap about how sound of an investment it might be to own a house. It’s clear to me that nobody learned anything from 2008 and it’s gonna happen again and again and again. Why waste time being a cog in the machine when I could instead rent a cheap apartment in a non-gentrified area and eat an additional commute to work? After all, how bad is that commute when I listen to my favorite albums on the road every morning? I couldn’t give a crap about hardwood floors or laminated countertops. I can assure you my cat doesn’t care either.
It only cares about being cute, eating and shitting. This isn’t Flip or Flop; it’s my life, and it’s ending one second at a time. The most sound investment I can make is to not make unsound investments and instead park money in retirement accounts for when I’m too weak to work (if I’m lucky enough to reach that age).

The majority of the 3 people who will read this post will undoubtedly go ahead and quit their jobs / marriages / whatever they think makes them sad, hoping to no longer be sad, and once they realize nothing’s changed, they will wonder what that inflated bag of douche was going on about on the internet, and rage. That is of course not the essence of this post. The essence is that this whole “never quit” business can sometimes be lip service. Sometimes, the best thing to do is quit. Maybe the goal was BS from the beginning, or maybe the investment is no longer worth it. Maybe, in your pursuit of a goal that no longer seems that interesting, you’ve unearthed an alternative path. In fact, this was another one of my own reasons. I can’t even describe how much more satisfying teaching is for me when compared to research.

But it’s not for everyone. Not everyone can quit. Especially when quitting means betraying all that you thought you were or were going to be, and also endangers the immigration status that is keeping you away from your home country, which is enmeshed in a web of decades-long socio-economic failure. It takes courage to quit, and not everyone can do it.


A simple Rational API in Scala

This post is meant for Java programmers who are curious about what Scala has to offer them. It is often assumed that there is little reason to learn Scala ever since Java 8 included lambda expressions and streams. I find that assumption to be false, and with this example I hope to explain some of the reasons why.

I have been following the online course Functional Programming in Scala, offered by EPFL. In typical Jason fashion, instead of actually sticking to my deadlines and providing deliverables on time, I took one of the examples and extended it to my liking over a period of weeks. So let’s see how we would build an API for rational numbers in Scala. As a reminder, a rational number is any number that can be written in the form a/b, where a and b are integers, b non-zero. The goal of this exercise is to (a) play around with Scala, showcasing that even the most basic features of the language can provide great benefits in readability and extensibility of code, and (b) investigate whether our new library can offer time and accuracy improvements in computations that involve many rational numbers, when compared to what the compiler will do by default: generate the Double value that approximates a / b as well as it can within 64 bits, and work with those approximations. The entirety of the code is available under GPL here (src/main/scala/weel2/rational). Let’s begin.

Initialization and basic methods

In Scala, we are allowed to execute statements inside a class’ body. Those statements make up the default constructor of the class. For our Rational type, we will do a neat trick: in order to make the various computations faster, we will make sure we reduce the fraction as much as possible during construction. For example, if the user asks us to create the fraction 10/15, our implementation will reduce that to 2/3, as follows:

class Rational(x: BigInt, y: BigInt) {

  import Rational._

  require(y != 0, "Cannot create a Rational with denominator 0")

  // Simplify the representation by dividing both numer and denom by their gcd.
  private def gcd(a: BigInt, b: BigInt): BigInt =
    if (b == 0) a else gcd(b, a % b)

  private val g: BigInt = gcd(x, y)  // a val may not share the name of gcd/2
  private val numer = x / g          // Immutable
  private val denom = y / g

require will throw an IllegalArgumentException if the caller requests a rational with a denom of 0. The tail-recursive method gcd(a, b) will calculate the greatest common divisor of x and y, and then the constructor will consequently divide x and y and store the resulting values in numer and denom respectively. We use BigInt instead of Int for our inner representation so that we can deal with fractions of really large integers.

Of course, we can define other constructors if we would like, as in Java. For example, since all integers can be expressed as rationals (just give a denom of 1), we can do this:

def this(x:BigInt) = this(x, 1)

Furthermore, and this will be important for our experiments, we want to force the program to output the “pure” representation of our Rational instances instead of dividing them all the time, so we override toString as follows:

override def toString : String = numer + "/" + denom 

So that, instead of printing 0.2, println(new Rational(20, 100)) will now print 1/5. Feel free to browse the code for the overridings of equals and hashCode.
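Putting the pieces so far together, here is a minimal, self-contained sketch of the class as described up to this point (trimmed down for illustration; the repository version has more members, and, as in the post, negative arguments are not sign-normalized here):

```scala
// Minimal sketch: gcd reduction at construction, an auxiliary
// constructor for integers, and the toString override.
class Rational(x: BigInt, y: BigInt) {
  require(y != 0, "Cannot create a Rational with denominator 0")

  private def gcd(a: BigInt, b: BigInt): BigInt =
    if (b == 0) a else gcd(b, a % b)

  private val g = gcd(x, y)  // a val may not share the method's name
  val numer: BigInt = x / g
  val denom: BigInt = y / g

  def this(x: BigInt) = this(x, 1)  // integers are rationals with denom 1

  override def toString: String = s"$numer/$denom"
}

println(new Rational(10, 15))   // prints 2/3
println(new Rational(20, 100))  // prints 1/5
println(new Rational(7))        // prints 7/1
```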

Operator overloading

Among the most discussed drawbacks of Java is its lack of operator overloading, something that C++ programmers in particular may find limiting. In Scala, method names are not restricted to alphanumeric identifiers: a method can also be named with a string of symbolic characters, such as + or *, which is exactly what makes operator-style methods possible. For our rational number API, it would clearly be very convenient if we could transparently overload the meaning of the classic binary arithmetic operators: +, -, *, /, etc. In our example, we overload all of those, as well as unary -, the caret operator (^) to denote exponentiation, and the binary comparison operators (>, <, <=, etc.). Here are some examples.

def + (that: Rational): Rational = {
  require(that != null, "Rational + Rational: Provided null argument.")
  new Rational(this.numer * that.denom + that.numer * this.denom,
               this.denom * that.denom)
}

def + (that: BigInt): Rational = this + new Rational(that, 1)

def * (that: Rational): Rational = {
  require(that != null, "Rational * Rational: Provided null argument.")
  new Rational(this.numer * that.numer, this.denom * that.denom)
}

def * (that: BigInt): Rational = new Rational(numer * that, denom)

We use BigInt instances for the numerator and denominator representation since the stress tests we show later on can produce very large integers. In the example above, given two Rational instances a and b we define what the behavior of + should be in the expression a + b. We also define the behavior of a + i, where i is a BigInt instance. To re-use the existing implementation for a + b, a + i is translated to a + i/1, which is already defined.

For unary operators, Scala requires that we prefix the operator’s name with unary_, to disambiguate them from their binary counterparts:

def unary_- : Rational = new Rational (-numer, denom)

Binary comparison operators:

def > (that: Rational): Boolean = numer * that.denom > that.numer * denom

def >= (that: Rational): Boolean = (this > that) || (this == that)

def > (that: BigInt): Boolean = this > new Rational(that, 1)

def >= (that: BigInt): Boolean = {
  val r = new Rational(that, 1)
  (this > r) || (this == r)   // Scala calls "equals" when "==" is invoked
}

def < (that: Rational): Boolean = !(this >= that)

def <= (that: Rational): Boolean = !(this > that)

def < (that: BigInt): Boolean = !(this >= that)

def <= (that: BigInt): Boolean = !(this > that)
Note how we re-use existing overloadings as much as we can. It’s a huge syntactic relief to be able to implement functionality via universally recognized operators instead of having to implement ugly methods like plus, minus, multiply, subtract… And as we will see later, it makes it possible to alter the semantics of an entire computation chain just by changing one line of code!
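To see the syntactic payoff in one place, here is a trimmed-down, self-contained sketch carrying only the operators discussed above (I normalize the denominator’s sign here so that unary - prints nicely; the repository code may handle this differently):

```scala
// Just enough of Rational to exercise +, *, unary_- and > as real operators.
class Rational(x: BigInt, y: BigInt) {
  require(y != 0, "Cannot create a Rational with denominator 0")

  private def gcd(a: BigInt, b: BigInt): BigInt =
    if (b == 0) a else gcd(b, a % b)

  // Divide by the gcd and keep the denominator positive.
  private val g = { val d = gcd(x.abs, y.abs); if (y < 0) -d else d }
  val numer: BigInt = x / g
  val denom: BigInt = y / g

  def + (that: Rational): Rational =
    new Rational(numer * that.denom + that.numer * denom, denom * that.denom)
  def * (that: Rational): Rational =
    new Rational(numer * that.numer, denom * that.denom)
  def unary_- : Rational = new Rational(-numer, denom)
  def > (that: Rational): Boolean = numer * that.denom > that.numer * denom

  override def toString: String = s"$numer/$denom"
}

val a = new Rational(1, 2)
val b = new Rational(1, 3)
println(a + b)  // 5/6 -- reads like arithmetic, not a.plus(b)
println(a * b)  // 1/6
println(-a)     // -1/2
println(a > b)  // true
```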

“No” to bloated pattern-matching operators!

Notice in the code above that we overload the overloaded operators themselves by writing a different method for every type of argument. This seems a bit tedious at first, and one might think: isn’t it more of a “functional” approach to take an argument of type Any and then pattern match against it? Like our overriding of the equals method does:

private def canEqual(x: Any): Boolean = x.isInstanceOf[Rational]

private def sameRational(that: Rational): Boolean =
  (numer == that.numer) && (denom == that.denom) ||
  (numer == -that.numer) && (denom == -that.denom)

override def equals(that: Any): Boolean = {
  that match {
    case that: Rational => that.canEqual(this) && sameRational(that)
    case that: BigInt   => sameRational(new Rational(that, 1))
    case _              => false
  }
}

For example, we could do something like this:

def + (that: Any): Rational = {
  require(that != null, "+(Rational, Any): Provided null argument.")
  that match {
    case that: Rational => new Rational(this.numer * that.denom
                                          + that.numer * this.denom,
                                        this.denom * that.denom)
    case that: BigInt   => new Rational(this.numer + that * this.denom,
                                        this.denom)
    case that: Int      => new Rational(this.numer + that * this.denom,
                                        this.denom)
    case that: Double   => ??? /* ... */
    case _ => throw new UnsupportedOperationException(
                "+(Rational, Any): Unsupported operand.")
  }
}

Based on the discussion here, I was able to ascertain that this would be a terrible idea, because it would change a compile-time error to a runtime error. For instance, with our current implementation which only overloads + for BigInt and Rational arguments, the code snippet:

val y = new Rational(10, 11)
y + 2.4

does not compile:

Demonstration of the compile-time error when attempting to use an operator on unsupported types. String is also listed among the expected types because it is possible to concatenate the output of Rational::toString with the string literal “2.4” to produce the concatenated String “10/112.4”.

On the other hand, if we implement + in the way above, the code would be prone to a runtime error, which is of course not preferred. So no matter how Java-esque and tedious it might seem, the multi-method way seems like the better solution.

Lazy evaluation of expressions that throw

A lazy val in Scala does exactly what you think: it defines a value head = body where body is evaluated only the first time head is accessed, with the result cached thereafter. This can sometimes lead to interesting behavior. In the companion object of Rational, we define some constants:

object Rational {

  val ONE = new Rational(1, 1)

  val ZERO = new Rational(0, 1)

  val MINUSONEOVERONE = new Rational(-1, 1)

  val ONEOVERMINUSONE = new Rational(1, -1)

  lazy val ZEROOVERZERO = new Rational(0, 0)
}

Note that the last constant, representing the undefined form 0/0, is a lazy val. Its RHS is a constructor call with both arguments set to zero. However, as a reminder, whenever a denominator of zero is provided to the Rational constructor, we make sure to throw an instance of IllegalArgumentException:

require(y != 0, "Cannot create a Rational with denominator 0")

Since the constant is lazy, its RHS, which would throw, is not evaluated while the companion object is being initialized; it only runs if and when ZEROOVERZERO is actually accessed. This means that the class Rational itself has the capacity to internally define how it wants exceptional instantiations to behave! In this case, by “exceptional” instantiation we are referring to the undefined form 0/0, which has been encoded as a constant in the companion object, but we could imagine all sorts of classes with exceptional instantiations that might need to be documented! Especially for applications which monitor global state, where variables are mutated irrespective of what’s going on in the current thread’s call stack, having the capacity to define special cases of a type can be powerful!

Since all the statements within the body of the companion object have to be evaluated (unless lazy!) before the companion object can come into existence, stripping the lazy keyword away from the definition would make the companion object’s initialization itself throw, so that requesting ZEROOVERZERO, or any other constant for that matter, would fail. Feel free to pull the code, take away the keyword lazy from the line that defines ZEROOVERZERO, and see that either one of those statements will throw:

object LazyEval extends App {

  import Rational._
  // ONEOVERMINUSONE // This won't work if `ZEROOVERZERO` *ISN'T* lazy
  // ZEROOVERZERO    // Neither will this
}

It turns out that it is not possible to emulate this behavior in Java; that is, to have an expression that is part of a type’s initialization (and so has to be evaluated before the type can be used!) be evaluated lazily. The following type:

public class Uninstantiable {

    public static int badField = willThrow();

    public static int willThrow() {
        throw new RuntimeException("willThrow()");
    }
}


despite the fact that we only define a static field and a static method, is indeed uninstantiable, since any use of the class triggers its static initialization, which throws:

public class UninstantiableRunner {

    public static void main(String[] args) {
        // Cannot instantiate: throws ExceptionInInitializerError.
        new Uninstantiable();
    }
}

Now, care must be taken so that nobody thinks we are making the wrong claim. The claim is not that lazy behavior cannot be implemented in Java. It absolutely can, and it’s a well-known and widely-used pattern. Here’s a simple example:

public class JavaLazyVal {

    private long val;

    public JavaLazyVal() {
        val = 1;  // Sentinel value: "not yet computed".
    }

    // Pay the cost of the computation only once.
    public long getVal() {
        if (val == 1) {
            System.out.println("Performing an expensive computation...");
            val = (long) Math.pow(3, 10);  // Assumed expensive. We have
                                           // to ensure that the computed
                                           // value CAN'T be equal to the
                                           // sentinel value!
        } else {
            System.out.println("No longer performing the expensive computation.");
        }
        return val;
    }
}

A simple runner program:

public class JavaLazyValRunner {

    public static void main(String[] args) {
        JavaLazyVal jlv = new JavaLazyVal();
        System.out.println("val = " + jlv.getVal());
        System.out.println("val = " + jlv.getVal());
    }
}

And its output:

Lazy behavior of getVal() exemplified.

So it is absolutely possible to have lazy behavior in Java, and the pattern is well known to Java programmers. The claim, instead, is three-fold. First, in Scala you can do this much, much more concisely:

object LazyEval extends App {
  lazy val x = {  // Entire blocks can be the RHS of a definition
    println("Evaluating x")
    scala.math.pow(3, 10)
  }
  println(x); println(x)
}

/* Output:
 * Evaluating x
 * 59049.0
 * 59049.0
 */

Second, the Java code that we had to introduce is open to one possible source of logical error: the expensive computation might itself return the sentinel (initializer) value! In some applications, ruling this out can be hard!

Third, with the mechanism of lazy val, you can store expressions which would otherwise prohibit the construction of a class instance! And that can have power.

Tail-recursive repeated squaring

This isn’t in any way Scala- or functional-programming-specific; it’s just something cool that can be written rather concisely in this language / paradigm. We overload the exponentiation operator (^) to perform repeated squaring with a tail-recursive method:

def ^ (n: Int): Rational = {

  // Inner two-arg method pow/2. Not tailrec itself:
  // the innermost method pow/4 is.
  def pow(base: BigInt, exp: Int): BigInt = {
    require(exp >= 0 && base >= 0,
            "We need non-negative integer arguments for this method.")

    // Power computation with tail-recursive repeated squaring.
    @scala.annotation.tailrec
    def pow(currExp: Int, maxExp: Int, currTerm: BigInt,
            prodAccum: BigInt): BigInt = {
      assert(currExp <= maxExp,
             "The current exponent should never surpass the original one.")
      if (currExp <= maxExp / 2)
        // Next iteration on the current term.
        pow(2 * currExp, maxExp, currTerm * currTerm, prodAccum)
      else if (currExp == maxExp)
        // Bottomed out.
        currTerm * prodAccum
      else
        // Compute the residual product (rest of the terms)
        // using the same method.
        pow(1, maxExp - currExp, base, currTerm * prodAccum)
    }

    if (base == 0 && exp == 0)
      throw new IllegalArgumentException("0^0 is an undefined form.")
    else if (base == 0 && exp > 0) 0
    else if (base > 0 && exp == 0) 1
    else if (exp == 1) base
    else if (base == 1) 1
    else pow(1, exp, base, 1)   // Call to tailrec pow/4
  }

  if (n == 1) this
  else if (n == 0) ONE

  // Calls to (non-tailrec) pow/2
  else if (n < 0) new Rational(pow(denom, -n), pow(numer, -n))
  else new Rational(pow(numer, n), pow(denom, n))
}


The innermost 4-arg method pow is of interest here. The snippet (3/2)^19 is translated to the calls pow(1, 19, 3, 1) for the numerator 3^19 and pow(1, 19, 2, 1) for the denominator 2^19, via the line else new Rational(pow(numer, n), pow(denom, n)). Observe that 3^19 = 3^16 * 3^2 * 3^1. Through the machinery of repeated squaring, the first if condition in pow/4 keeps doubling the current exponent for as long as the doubled exponent would not “surpass” the maximum one. In 4 = log_2(16) iterations it computes the value of 3^16, and then the recursive call pow(1, maxExp - currExp, base, currTerm * prodAccum) handles the residual exponent 19 - 16 = 3, folding the intermediate result into the product currTerm * prodAccum.
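Abstracted away from Rational, the recursion can be sketched and sanity-checked on its own (the names power and go are mine, for illustration; exp is assumed >= 1):

```scala
// Standalone sketch of the pow/4 scheme: keep squaring the current term
// while the doubled exponent stays within budget, then recurse on the
// residual exponent, folding the partial result into the accumulator.
def power(base: BigInt, exp: Int): BigInt = {
  @scala.annotation.tailrec
  def go(currExp: Int, maxExp: Int, currTerm: BigInt, prodAccum: BigInt): BigInt =
    if (currExp <= maxExp / 2)
      go(2 * currExp, maxExp, currTerm * currTerm, prodAccum)  // keep squaring
    else if (currExp == maxExp)
      currTerm * prodAccum                                     // bottomed out
    else
      go(1, maxExp - currExp, base, currTerm * prodAccum)      // residual part
  go(1, exp, base, 1)
}

println(power(3, 19))  // 1162261467 == 3^16 * 3^2 * 3^1
```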

Stress testing

So, all this is cool and all, but how do we fare in practice? We want to make two measurements:

(1) How well does our Rational type perform in terms of accuracy and time? To measure this, we run two map-reduce tasks on a large chain of rational numbers under both the Double and the Rational representations, and compare results. We generally expect to win in accuracy but lose in time, since we have to spend significant time on calls to the Rational constructor.

(2) How well does our exponentiation method scale when compared to scala.math.pow, in terms of speed?

To measure time we used the following method presented here:

def time[R](block: => R, msg: String): R = {
  val t0 = System.nanoTime()
  val result = block    // call-by-name: evaluated here
  val t1 = System.nanoTime()
  println("Elapsed time for " + msg + " was " + (t1 - t0) + "ns")
  result
}

And use the following script for the first experiment:

println("========== EXPERIMENT 1: MAP-REDUCE ==========")

final val rng = new Random(47)
final val MAX_ITER = 1000 // Vary
final val MAX_INT = 100   // Vary
val intTupleSeq: Seq[(Int, Int)] = for (_ <- 1 to MAX_ITER) yield
  (rng.nextInt(MAX_INT) + 1, rng.nextInt(MAX_INT) + 1)
val quotientSeq: Seq[Double] = intTupleSeq map { case (a, b) => a / b.toDouble }
val rationalSeq: Seq[Rational] = intTupleSeq map { case (a, b) => new Rational(a, b) }

// Sums first...
val quotientSum = time(quotientSeq.sum, "quotient sum")
val rationalSum = time(rationalSeq.reduce((x, y) => x + y), "Rational sum")
println("Value of quotient sum: " + quotientSum)
val evaluatedRationalSum = rationalSum.eval
println("Value of Rational sum: " + evaluatedRationalSum)
println("Error: " + Math.abs(quotientSum - evaluatedRationalSum))

// Products second...
val quotientProd = time(quotientSeq.product, "quotient product")
val rationalProd = time(rationalSeq.reduce((x, y) => x * y), "Rational product")
println("Value of quotient product: " + quotientProd)
val evaluatedRationalProd = rationalProd.eval
println("Value of Rational product: " + evaluatedRationalProd)
println("Error: " + Math.abs(quotientProd - evaluatedRationalProd))

Keep in mind that the bigger the error, the better the case for our Rational type, since up to the point of the call to eval it has admitted zero representational degradation.

(i) Map-Reduce

We run two simple Map-Reduce tasks: sum and product over a list of 1000 uniformly distributed, strictly positive Ints. We vary the largest possible Int and generate time and error metrics for each.

Sum times:

We see that generating the chain of sums for the Rational type takes about seven times longer than generating the primitive Double sum. This is to be expected, since there are costs associated with allocating memory on the heap and performing several assignments. We should also remember that the Rational constructor performs the GCD reduction aggressively during construction, and the runtime of that algorithm is affected by the magnitude of the bigger of the two integers for which it is called.

What about accuracy?

Remember: the greater the error, the better for our Rational type. It seems as if we have to go all the way to the 12th decimal digit, in a 64-bit machine, to find any appreciable difference.

When it comes to the product task:

Now this is somewhat surprising: the Rational type’s slowdown relative to the Double version is even greater than in the sum case. Once again, here is the source for +(Rational, Rational) and *(Rational, Rational).

def + (that: Rational): Rational = {
  require(that != null, "Rational + Rational: Provided null argument.")
  new Rational(this.numer * that.denom + that.numer * this.denom,
               this.denom * that.denom)
}

def * (that: Rational): Rational = {
  require(that != null, "Rational * Rational: Provided null argument.")
  new Rational(this.numer * that.numer, this.denom * that.denom)
}

This code implements the formulae for the addition a/b + c/d = (a*d + c*b) / (b*d) and the product (a/b) * (c/d) = (a*c) / (b*d). In the constructor call of + we pay three multiplications and one addition, whereas in the constructor call of * we pay only two multiplications. So it’s definitely not the number of operations, but the cost of those operations, particularly when BigInt instances are involved. And, in a product chain, BigInts can get really big.
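The growth is easy to observe directly. A quick sketch (the constants mirror the experiment’s, but this is just an illustration of magnitude, not a benchmark):

```scala
import scala.util.Random

// How many bits does the product of a 1000-term chain of factors in
// [1, 100] need? On the order of several thousand.
val rng = new Random(47)
val factors = Seq.fill(1000)(BigInt(rng.nextInt(100) + 1))
val prod = factors.product
println(prod.bitLength)
```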

What about product error?

Smaller benefits than the sum (note the different scale, from 10^-12 down to 10^-16), to the point of not even being appreciable for small values! I have no firm explanation for why this happens. In all honesty, I should have averaged each of those graphs over a number of runs to generate higher-fidelity points.

One thing to consider as well is that in order to even produce error graphs, we need a way to evaluate the Rational instance analytically:

def eval: Double = numer.doubleValue() / denom.doubleValue()
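One caveat worth flagging here (my observation, not a claim from the repository): because eval converts the numerator and denominator to Double separately, it can overflow even when the ratio itself is perfectly representable:

```scala
// Both conversions overflow to Infinity, so the quotient is NaN,
// even though the true value of the ratio is exactly 0.5.
val big = BigInt(10).pow(400)  // far beyond Double.MaxValue (~1.8e308)
val r = big.doubleValue / (big * 2).doubleValue
println(r.isNaN)  // true
```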

(ii) Repeated squaring for exponentiation calculation

Just for fun, I elected to run an experiment to see how our overloading compares to scala.math.pow. Since a^n can be computed in log_2(n) steps using repeated squaring, we vary the exponent n and keep the base fixed at 3/2 (1.5 in the Double representation). The computations are of course exact (Error = 0). We are exclusively interested in time.


println("========== EXPERIMENT 2: EFFICIENCY OF EXPONENTIATION ==========")
final val EXPONENT = 500
var quotientPower: Double = 0.0
var rationalPower: Rational = _
time(quotientPower = scala.math.pow(1.5, EXPONENT),
     "Double raised to the power of " + EXPONENT)
time(rationalPower = new Rational(3, 2) ^ EXPONENT,
     "Rational raised to the power of " + EXPONENT)
println("Value of power from Doubles: " + quotientPower + ".")
println("Value of power from Rationals: " + rationalPower.eval + ".")
println("Error: " + Math.abs(quotientPower - rationalPower.eval))


It is laughable to even assume that a custom implementation would in any way compare to scala.math.pow, but what is interesting is that both implementations appear to scale very well as the exponent increases. This is to be expected because of the power of the logarithm, which “tempers” even large values of n down to very tractable values of magnitude ceil(log_2(n)).


In this post, we saw several advantages of the Scala implementation of a rather simplistic numerical type over the corresponding Java implementation. The big advantages have to do not with the runtime, but with the source. In Scala, we are able to overload operators naturally, as in C++, albeit with more intuitive syntax and the usual perks of not having to deal with raw pointers or memory cleanup (if ever required in the overloading of some operator…). We can also use the power of built-in lazy vals to do cool things, such as allowing a type to define its own exceptional instances and deal with them in any way it pleases (even dynamically). We also compared two different ways of overloading methods in Scala, and concluded that it is far better to emulate the Java-like way of several methods with the same name, instead of pattern-matching on an argument of type Any, as is the more “natural” way in Scala. Finally, we saw a simple application of the language’s syntax and its emphasis on tail recursion and inner methods with our repeated squaring method.
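To make that overloading comparison concrete, here is a minimal sketch (hypothetical classes, not the post’s actual Rational) of the two styles: Java-like same-name methods, resolved statically, versus a single method pattern-matching on Any, where a bad argument type only fails at runtime with a MatchError.

```scala
// Java-like style: several methods with the same name. The compiler picks
// the right one, and rejects unsupported argument types at compile time.
class RationalLike(val numer: BigInt, val denom: BigInt) {
  def +(that: RationalLike): RationalLike =
    new RationalLike(numer * that.denom + that.numer * denom, denom * that.denom)
  def +(that: Int): RationalLike =
    new RationalLike(numer + BigInt(that) * denom, denom)
}

// "Natural" Scala style: one method matching on Any. Unsupported argument
// types compile fine and blow up at runtime with a MatchError.
class RationalAny(val numer: BigInt, val denom: BigInt) {
  def +(that: Any): RationalAny = that match {
    case r: RationalAny =>
      new RationalAny(numer * r.denom + r.numer * denom, denom * r.denom)
    case i: Int =>
      new RationalAny(numer + BigInt(i) * denom, denom)
  }
}

val half = new RationalLike(1, 2)
val sum  = half + 1                     // statically resolved to +(Int)
println(sum.numer + "/" + sum.denom)    // 3/2
```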

In my view, the big advantage of this entire experiment is the elegance and modularity with which we can switch from Double to Rational and vice versa. An example can be found in the object Runner:

object Runner extends App {

  private def fraction(a: Int, b: Int) = {
    require(b != 0, "Cannot create a fraction with denominator zero.")
    new Rational(a, b)
    // a.toDouble / b.toDouble
  }

  val a = fraction(5, 6)
  val b = fraction(2, 3)
  val c = fraction(1, 2)
  val d = fraction(10, 11)
  val e = fraction(25, 15) // Rational will reduce to 5/3
  val f = fraction(23, 13)
  val g = fraction(230, 17)
  println(a * b + c * d - e * (f + g))
}
The output of the above code as given is -535765/21879, whereas, if we uncomment the final line of fraction(Int, Int), we get -24.487636546460074, the evaluated fraction. And the only thing we need to change is the inclusion or exclusion of the comment marker //.

All that said, I would definitely not implement numerical software on top of the JVM. C / C++ / native assembly is the only way to go if we want to go that low-level. Furthermore, it is not clear to me, a person who is not professionally involved with commercial-grade numerical software, how a 12th-decimal-digit difference in accuracy can in any way be significant. Perhaps NASA and SpaceX care about such small differences, to make sure that autonomous vehicles land on Mars and not Venus. Perhaps not. Perhaps the advantage of being able to move to smaller bit widths (Arduino, IoT devices) would justify the use of a “pure” numerical type such as our Rational. The matter of symbolic computation is huge, interesting, important, and with this blog post we are not even making a ripple on its surface.

Breaking @tailrec

To follow this example, you will need to know some basic (really basic) Scala. You will also need to know what call-by-name and call-by-value mean in the context of functional programming. These concepts are only loosely related to “pass-by-value” and “pass-by-reference” in imperative programming; you should not confuse them. tl;dr at the end.

I have recently been learning Scala and functional programming in general. Recursive exercises are a great warm-up to make the imperative programmer start thinking in a more recursive manner, so one of the first few examples I wrote was factorial. I realized I needed to understand quickly how the language treats tail recursion, and factorial is a great exercise for that.

The following intuitive one-liner is, unfortunately, not tail recursive, but at least the stack grows linearly with n:

def factorialNaive(n: Int): Int = if (n == 0) 1 else n * factorialNaive(n - 1)

So a tail-recursive implementation is quite appropriate here, and the following pair of functions will get the job done in exactly such a manner:

def factorialTailRec(n: Int): Int = {
    def factorialTailRec(n: Int, f: Int): Int = {
      if (n == 0) f
      else factorialTailRec(n - 1, n * f)
    }
    factorialTailRec(n, 1)
}

Now at this point it’s important to remind ourselves that the @tailrec annotation, available after we import scala.annotation.tailrec, is not required, but is a very good idea to include, since it makes the compiler reject the program if the recursive call is not in tail position. If we were to use that annotation on factorialNaive, the code would not compile.

Now here is where an evil idea crept into my mind. I noticed that I was passing the second parameter of factorialTailRec by value. I thought to myself: “This means that every ‘stack frame’ (more like iteration scope at this point) is burdened with one multiplication… so all the way down the call chain we have n multiplications. The alternative, passing the parameter by name, would delay any multiplication up to the last stack frame, where we bottom out with a term of 1. So I would expect similar performance.”

Turns out, the above is only partly true, and the most interesting fact is not included in it! You see, while the product computation is indeed delayed until the very end, its terms are captured inside a chain of thunks, one per “stack frame”, built by the call-by-name parameter! Evaluating each multiplication, even when one of its factors could have been evaluated within the current stack frame, requires popping another stack frame, a frame that was previously pushed by the otherwise tail-recursive call that passed its 2nd argument by name!
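A minimal sketch of the effect, using a sum instead of a product and hypothetical names: the recursive call is in tail position, yet forcing the by-name accumulator in the base case unwinds a chain of n nested thunks.

```scala
// `acc` is passed by name: each call wraps the previous thunk in a new one,
// and forcing `acc` in the base case unwinds the whole chain, one stack
// frame per thunk.
def sumByName(n: Int, acc: => Int): Int =
  if (n == 0) acc                 // n nested thunks are evaluated right here
  else sumByName(n - 1, n + acc)

// Fine for small n; a large n risks a StackOverflowError when `acc` is forced.
println(sumByName(10, 0))  // 55
```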

To test this, we can use the following runner program. Note that I don’t care at all about the actual factorial values, so I let the result overflow and be as nonsensical as it likes. I’m not even assigning it anywhere. I’m interested exclusively in how the code affects the use of the JVM’s stack. For the hyper-exponential factorial function, even the Long data type is not sufficient, and one had best use an efficient BigInteger library or Stirling’s approximation if they care about computing the values of large factorials.

package factorial

import scala.annotation.tailrec

object Factorial extends App {

  val ITERS = 100000      // Let's push things

  // Naive factorial
  def factorialNaive(n: Int): Int = if (n == 0) 1 else n * factorialNaive(n - 1)

  try {
    for (i <- 1 to ITERS) factorialNaive(i)
    println("Naive factorial worked up to " + ITERS + "!")
  } catch {
    case se: StackOverflowError => println("Naive factorial threw an instance of StackOverflowError.")
    case e: Exception => println("Naive factorial threw an instance of " + e.getClass + " with message: " + e.getMessage + ".")
  }

  // Tail-recursive factorial
  def factorialTailRec(n: Int): Int = {
    @tailrec
    def factorialTailRec(n: Int, f: Int): Int = {
      if (n == 0) f
      else factorialTailRec(n - 1, n * f)
    }
    factorialTailRec(n, 1)
  }

  try {
    for (i <- 1 to ITERS) factorialTailRec(i)
    println("Tail recursive factorial worked up to " + ITERS + "!")
  } catch {
    case se: StackOverflowError => println("Tail recursive factorial threw an instance of StackOverflowError.")
    case e: Exception => println("Tail recursive factorial threw an instance of " + e.getClass + " with message: " + e.getMessage + ".")
  }
}

Notice that in factorialTailRec, the accumulator argument is passed by value. Running this program for the given parameter of ITERS=100,000 yields the output:

Naive factorial threw an instance of StackOverflowError.
Tail recursive factorial worked up to 100000!

Nothing surprising so far. But what if I were to pass the second argument by name instead, by changing the method declaration to:

def factorialTailRec(n: Int, f: => Int): Int = ...

Then, our output is:

Naive factorial threw an instance of StackOverflowError.
Tail recursive factorial threw an instance of StackOverflowError.

Everybody has my express permission to print the above output and paste it on their office doors, perhaps appropriately captioned. Not to mention that @tailrec did not complain at all! The file compiled just fine. I’m not sure whether it’s possible to have @tailrec disallow call-by-name parameters. It’s one of those things that sound easy to do, but are probably very difficult in practice.

Now let’s play a little game. Suppose that for whatever reason you cannot change the signature of factorialTailRec, and you are stuck with an idiotic call-by-name that has doomed your method to tail-recursively constructing a linear-size stack out of a function that it itself builds… 😦 Is there anything we can do to achieve call-by-value?

Yup. This:

def factorialTailRec(n: Int): Int = {
    def factorialTailRec(n: Int, f: => Int): Int = {
      if (n == 0) {
        val fEvaluated = f
        fEvaluated
      } else {
        val fEvaluated = f
        factorialTailRec(n - 1, n * fEvaluated)
      }
    }
    factorialTailRec(n, 1)
}

This is a cheap, general template for emulating call-by-value on a by-name parameter if you need it. Right now, with my limited Scala experience, I can’t envision a scenario where this would be preferable to a simple call-by-value declared at the method signature level, but hey, it works:

Naive factorial threw an instance of StackOverflowError.
Tail recursive factorial worked up to 100000!

Finally, to once again underscore the difference between val and def: changing the two lines val fEvaluated = f into def fEvaluated = f does nothing to evaluate the parameter, since defs are themselves evaluated by name (they are aliases)! In fact, I’d be willing to bet that it doubles the constant factor in front of the O(n) space occupied by the stack.

Naive factorial threw an instance of StackOverflowError.
Tail recursive factorial threw an instance of StackOverflowError.
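That val / def distinction can be sketched in a few lines (hypothetical names): with a by-name parameter, a val forces the thunk exactly once, while a def is just another by-name alias that re-runs the thunk at every use.

```scala
// Count how many times the by-name argument is actually evaluated.
var evaluations = 0

def demo(x: => Int): Int = {
  def alias  = x          // no evaluation here: a def is itself by-name
  val forced = x          // evaluated exactly once, right here
  alias + alias + forced  // each use of `alias` re-evaluates the thunk
}

demo { evaluations += 1; 7 }
println(evaluations)  // 3: twice through the def alias, once through the val
```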

tl;dr I built a tail-recursive method in Scala that blows up the JVM’s stack.


Pearl Jam‘s second album, Vs., contains a song called Rats (along with several other masterpieces of their catalogue). The song features a beautiful bassline in E minor, as well as some interesting lyrics. It is those lyrics that came to mind recently.

The song essentially attempts to compare the behavior of rats with that of human beings. The first verse goes:

They don’t eat, don’t sleep
They don’t feed they don’t seethe,
Bare their gums when they moan and squeak
Lick the dirt off a larger one’s feet

From the get-go, the lyrics are weird. The first line names a negative characteristic of rats and implies that humans are unlike that (the assumption being that we sleep more regular hours and eat more regular meals than rats). But the second line is unclear: humans clearly seethe, and so do rats. The third and fourth lines clearly describe behaviors of rats.

Second verse:

They don’t push, don’t crowd
Congregate until they’re much too loud
Fuck to procreate till they are dead
Drink the blood of their so-called best friend

Well clearly this stanza chastises humans, with the possible exception of the third line, since I have the feeling that rodents tend to reproduce much more often than primates.

Chorus:
They don’t scurry when something bigger comes their way
Don’t pack themselves together and run as one
Don’t shit where they’re not supposed to
Don’t take what’s not theirs, they don’t compare

This is definitely in favor of humans except…

Verse #3:

They don’t scam, don’t fight
Don’t oppress an equal’s given rights
Starve the poor so they can be well fed
Line their holes with the dead one’s bread


Chorus again, twice this time:

They don’t scurry when something bigger comes their way
Don’t pack themselves together and run as one
Don’t shit where they’re not supposed to
Don’t take what’s not theirs, they don’t compare

Bridge to outro (not sure if this is even a correct term):

They don’t compare, rats
They don’t compare, rats
They don’t compare

And outro proper:

Ben, the two of us need look no more (x6)

In all likelihood a reference to the namesake character of the 1972 film Ben.

As one goes through this track, it’s hard not to “keep score”. Assuming that some of the more confusing lines are either pro-rat or pro-human, one starts making imaginary scoreboards in their mind.

- Maybe we need to split the first one 3/4 for humans and 1/4 for rats…

- Last line of the chorus is a snide remark on humans thieving…

- Doesn’t the bridge favor rats, since one would typically use the phrase “They don’t compare” in a positive way for whoever “they” is?


Until at some point, hopefully sooner rather than later, you realize that you are forced to count votes in order for humans to win a moral battle against rats.

On character

(And / or the inherently unquantifiable nature thereof.)

Of diamonds, flair, millions, bleach white teeth smiles, linen robes and endless miles of red carpets,

of 4 out of 4 GPAs and thousands of references, athletic prowess, spectacular extracurriculars,

of shining silver eyelids covering black eyes without irises, pink fonts on top of chopped moments describing what to see, and short sentences feeding you the need for more,

of fame for the sake of being famous, instead of for something that you did not for the sake of being famous, but because it was par for the course, whatever course that you have otherwise clearly set on yourself,

of comfortable people walking through life like a Markov process, no past queues, nothing in their past to draw pain from, and cry and learn, yet always a persistent avoidance of pain,

I’ve seen thousands.

Of pairs of piercing eyes, mouths that seldom speak, yet say so much,

of silence screaming, iterative self-doubt, pain-shaped cheeks,

of rebirth through disappointment, of mistake after mistake braved, regretted, braved again

of soccer cleats kicking dirt around on a dry May day, alone

of cigarette smoke emanating towards a purple August night sky, huffed from powerful, powerful lungs

of genius homeless dropouts with collections of pirated math texts

of cores that guide and give purpose, hearts that are armed and ready to get scraped

I see way too few.

My own “χάρτινο” (“paper title”)

The first time I got seriously involved with AEK was the 2002-03 season (the Rizoupoli year). Six draws in the Champions League, war waged on Bajević, Psomiadis with bouncers at Demis’s house and bags of cash at training, a November / December with 5 straight losses that ended the domestic season, 12 straight wins in the second half with away victories at Toumba and Rizoupoli, hubris against Málaga, beautiful football, and a finish two points off the top? Or was it three?

Yes, I remember it all vividly. You see, in those years a win for your team meant a better Monday, when you started the day with advanced-track Mathematics and, right after the first (long) break, Ancient Greek. Your mind still on that goal in the derby… How could I not remember that season perfectly? How could I not also remember the following season, when AEK nearly fell apart and finished barely above Egaleo, amid objections over whether it should even take part in the UEFA Cup… Objections from Egaleo, if you please.

Many years passed. For me, for AEK, for Greece. In 2007-08, at the “zenith” of Demis’s effort, by then as president, to bring us back to the top, the year of Rivaldo and Arruabarrena, the year of the Wallner affair and of the “paper title”, we lost the championship, and the decline began. Like those armies in the World Wars that outran their supply lines without ever taking their objective. They had no hope, and neither did AEK.

But which AEK fan honestly remembers how TERRIBLY AEK played back then, especially all things considered, for a squad with Rivaldo, Blanco, Edinho, Lymberopoulos, … Arruabarrena, … Tamandani Nsaliwa… a squad which, beyond the goal against Boleslav, did nothing else? Who remembers that its best player in the first half of the season was Papastathopoulos, who even scored against Aris in Thessaloniki? Who remembers the game against Iraklis at OAKA, where you missed the unmissable, and dropped your first points along with it? When it became clear that you were still a small club, because small clubs miss their chances, while big clubs win even when they are the worse side?

Who remembers one play from the home game against Villarreal, with the score at either 0-0 or 0-1, first half, in which the goalkeeper (Moretto?) throws the ball out to Tozser, that “Magyar steed” the journalists had been hyping to death? A play in which Tozser moves forward with the ball, and at some point a Spaniard (probably a forward, the indifferent Spaniards pressing only from half field) moves in to take the ball off him, and instead of playing a sideways pass, or at least attempting a dribble, HE SIMPLY SHOVED THE VILLARREAL FORWARD AND LOST THE BALL?

With that play, something inside me “broke”. The team had already shown tragic signs, not even being competitive against Sevilla in the qualifiers (merely competitive, mind you; no Greek team could have truly challenged that Sevilla), and winning ugly in the league. And in the second leg of the UEFA Cup qualifier against Salzburg, on the artificial turf, we were probably lucky to escape with only a 1-0 loss; thank goodness for the 3-0 of the first leg. And the moment I saw that play, something inside me truly “broke”, and I divorced AEK.

A year later (for AEK: Donis, Scocco, the beginning of the decline), I divorced Greece as well. Three days after the murder of Grigoropoulos, I went downtown to protest along with many of my fellow students. The march quickly descended into chaos, with burned cars strewn about the entrances of Exarcheia, smashed storefronts, desecrated banks and churches. To catch our breath, we sat at the Propylaea, which sits slightly elevated relative to Panepistimiou street.

An apartment on Korai street was burning. It would not have surprised me one bit if it had been a cram school with children inside: there were many in the area, and Monday evening is business hours for cram schools. A fire engine arrives from the Omonoia side, and a hooded rioter steps out onto Panepistimiou and holds out his palm to make the driver stop. The driver stops. Grand Theft Auto style, the hooded man pulls him out of the fire engine and throws a Molotov cocktail inside.

I had always wanted to do that something more with my life academically / mathematically (my quirk, my taste, my business), but any prospect that would have had me doing a Master’s or a PhD in Thessaloniki, at NTUA, or in Chania died at that moment. Greece, at that moment, was over for me.

Meanwhile, AEK was following its inevitable slide toward relegation. A laughable team that did not deserve to stay up in 12-13. And rightly it went down, since having the likes of Katidis in the starting eleven does not make you an attractive club in my eyes. My cynicism toward both AEK and Greece evolved into a philosophy, a philosophy I embrace to this day.

April 2nd, ’18, however, is a special day. Yesterday AEK beat Panathinaikos 3-0 in big-club style: a huge save from the keeper while being toyed with, and a screamer out of nowhere. The fact that two yellow cards were shown to a Panathinaikos player (Insua? Really, Panathinaikos fans, you used to curse at Basinas, and now you have Insua?) in the first half would be considered a “lack of respect” in no other league. You made bookable tackles? Take two yellows and off you go, my boy. That is what the few remaining “others” focus on today, and not, say, on the fact that Vranjes committed an obstruction inside the box when the score was 1-0 (so I believe an indirect free kick should have been given) and handled the ball inside the box while the score was 2-0 (a stonewall penalty). Vranjes who, for what it’s worth, is comfortably the best center back of the season.

AEK is the best team in the league this year. PAOK is 8 points behind AEK but 9 behind the title. So AEK, with 4 games remaining, against Platanias, Levadiakos, Kerkyra and Apollon Smyrnis, in the old neighborhood, where I lived my student years and where my father still lives, needs 4 points. Not even the small-time AEK we have grown used to over the past many years can lose this title.

A league which everyone rightly calls utterly farcical, but only from PAOK-Olympiacos onward. Because up to that point it was “the best league in many years”, “with a competitive PAOK, an Olympiacos that hasn’t wrapped it up by November”, and so on. Because we forget with laughable ease. A league in which, for as long as it was tolerable, AEK was still the best team. A league AEK deserves.

For me, AEK was always above Greece. For many years now I have been abroad, and I don’t think I will return soon. I have no opportunities to speak Greek. Shouting “GOAL” and “GO FUCK YOURSELF” at a TV playing some laughable illegal stream I found on Reddit is all I get, apart from a handful of Greek friends in the area. Greece and I have drifted apart; day by day it looks more and more like a foreign country to me.

But that yellow-and-black shirt still gives me chills, joy, sorrow. Mostly sorrow; we have grown used to the blows, those dealt by the “paranga” (the refereeing establishment) and those dealt by the team’s own incompetence over many years. Even today it makes me think of the lands of Asia Minor, of 1921 when, 100 years after we set out to take the Peloponnese, we lost forever Ayvali, Nicaea, Prousa, Smyrna, Constantinople. In times like these, such thoughts get labeled far-right. I cannot convince the reader that being Trump / Michaloliakos / Pétain is one thing, and wishing things had turned out differently is another: so that your country would now be twice its size, would hold the Bosporus, would have the raw materials, the money, the infrastructure not to be one of Europe’s poor relatives today. So that it would not have been an outright necessity for all the young people to leave, to escape the peasant crooks who run this country.

AEK > Greece. It is still a small club, and it will take great disappointment and great risk to become big again. You didn’t win it with Tsiartas and Nikolaidis, but you are winning it with Livaja and Araujo. Unheralded players who simply “clicked” and took the title. A title that is farcical and will probably remain farcical. But to me that matters little: because somewhere in my heart there still lives the little world of Kountouriotou and Veikou streets, of my school, of my cram school, of the hardest wake-up of my life, the morning after Grasshoppers - AEK 1-0 (the “syntrim tim”, the wreckage team, of 2003-04).

My own “paper title” is a championship AEK lost because it played terribly. Ten years after the “paper title”, and with a relegation in between, AEK wins the league fair and square, with Mantalos and Johansson injured all year, unbeaten in the Europa League, and with an incredible performance in Kyiv. What more could one want?

A better number sequence for teaching Cantor’s diagonalization

I’ve been teaching CMSC250: Discrete Structures at UMD’s CS department for a while now. It is essentially a more CS-friendly version of Discrete Mathematics, with more logic, set theory, structural induction, and countability. Countability, a rather esoteric subject, is challenging for our students. Naturally, a student who doesn’t get countability because they don’t get bijections cannot be helped until they understand bijections, but when it comes to the Cantorian diagonalization proof that the reals are uncountable (to be precise, that the interval [0, 1] is uncountable), one can lose many more students.

I conjecture that one possible reason for this is that the sequence of numbers used to create the 2D matrix is usually a bit arbitrary, in both texts and slides. For example, in my own slides, I have the following sequence of numbers:

[Slide screenshot: an arbitrary listing of reals in [0, 1].]

With such a sequence of numbers, I have observed that even medium-to-strong students oftentimes have trouble. So let’s try to improve our example. Here’s a nice trick: Write down only the diagonal portion of the listing of reals:

r_1 = 0.\mathbf{2}xxxxx\dots

r_2 = 0.x\mathbf{4}xxxx\dots

r_3 = 0.xx\mathbf{2}xxx\dots

r_4 = 0.xxx\mathbf{9}xx\dots

r_5 = 0.xxxx\mathbf{0}x\dots

r_6 = 0.xxxxx\mathbf{8}\dots

\ \vdots \qquad \vdots \qquad \vdots \qquad \vdots

Since the diagonal values are the only ones you need to construct the number of interest, go ahead and construct it; here, each digit of the new number is the corresponding diagonal digit plus 1 (mod 10). I call it r' (r “prime”) only because I would like to use r later.

r' = 0.353019\dots

And now we will replace all of those x‘s with the same-index digits of r' itself. For example, we will replace the 2nd, 3rd, 4th, 5th and 6th digits of r_1 with 5, 3, 0, 1 and 9 respectively. We will also replace the 1st, 3rd, 4th, 5th and 6th digits of r_2 with the corresponding digits of r', and so on and so forth:

r_1 = 0.\mathbf{2}53019\dots

r_2 = 0.3\mathbf{4}3019\dots

r_3 = 0.35\mathbf{2}019\dots

r_4 = 0.353\mathbf{9}19\dots

r_5 = 0.3530\mathbf{0}9\dots

r_6 = 0.35301\mathbf{8}\dots

\ \vdots \qquad \vdots \qquad \vdots \qquad \vdots


The benefit of carefully constructing the number sequence in this way is that the student can now see that these numbers, no matter how hard they try to look like our constructed number r', fail in at least one decimal digit. In fact, with this visualization of the first 6 decimal digits of every real r_j, there is a difference of exactly one decimal digit. We have essentially pushed the argument to its limits: even if we had a real r_k whose 1st, 2nd, 3rd, …, (k-1)th, (k+1)th, (k+2)th, … digits all agreed with those of r', its kth digit is guaranteed, by construction of r', to differ from the kth digit of r'.

The final piece of the argument can perhaps be shown as follows. The statement “[0, 1] is countable” can be re-worded as: “For every real r in [0, 1], there is some positive integer j such that r = r_j.”

The way I like to proceed from that point onward is the following: since every single r \in [0, 1] can be written as r_j for some positive integer j, this must also hold for our constructed real number r', since, clearly, by the way that I have constructed r', it is a number between 0 and 1!

Maybe this j is equal to 1. But this can’t be, since the two numbers differ in the first decimal digit, again by construction. And this is where the construction comes to help: all the other (visible) digits are the same. Still, a difference in a single digit suffices to conclude that, for a given j, r' \neq r_j.

Maybe j = 2. The first digit is fine; it is 3 in both r_2 and r'. But the second digit gives us a difference, despite the fact that all the other digits are the same!

Maybe j = 3, and so on and so forth.

So the point is that this enumeration actually provides a Discrete Mathematics student with more intuition about why the proof works.
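For what it’s worth, the construction can even be shown to students as a few lines of code. This is a hypothetical sketch (the names are mine); the digit-plus-one mod 10 rule is the one used in the example above.

```scala
// Given the first n digits of the first n listed reals, build a number
// whose j-th digit differs from the j-th digit of the j-th real:
// take the diagonal and add 1 mod 10.
def diagonalize(digits: Vector[Vector[Int]]): Vector[Int] =
  digits.indices.map(j => (digits(j)(j) + 1) % 10).toVector

// The listing from the example; its diagonal is 2, 4, 2, 9, 0, 8.
val listing = Vector(
  Vector(2, 5, 3, 0, 1, 9),  // r_1 = 0.253019...
  Vector(3, 4, 3, 0, 1, 9),  // r_2 = 0.343019...
  Vector(3, 5, 2, 0, 1, 9),  // r_3 = 0.352019...
  Vector(3, 5, 3, 9, 1, 9),  // r_4 = 0.353919...
  Vector(3, 5, 3, 0, 0, 9),  // r_5 = 0.353009...
  Vector(3, 5, 3, 0, 1, 8)   // r_6 = 0.353018...
)

println(diagonalize(listing).mkString)  // 353019 — the digits of r'
```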