When the last class finished, one boy came to me and complained: "Professor, when will we start learning challenging things in Haskell?"
I had to explain that many programmers find the concepts they were learning at that moment very difficult, but that for them, for some reason, it was extremely easy.
Finally, I want to share what they have done in both years.

Today I want to share a wonderful experience.
I have been a professor of introductory programming for two years at Florentino Ameghino School, and I teach two courses, Functional Programming and Data Structures, at Universidad Nacional de Quilmes. This time I want to speak about my work at the school.
In the 4th year the results are as follows. Last year I had eighteen students. Only one was enthusiastic about programming. Four of them learned a lot and did the exercises well. The rest were completely bored and did not want to participate much. This year I have fifteen students, and they are doing an excellent job. Only four of them are lazy; the rest are doing quite well, and some are extremely brilliant. All the students are inspired because they're truly learning programming concepts, in contrast to Scratch, where they learned only to write a lot of code without understanding their programs or the ideas behind them.
However, Scratch is an excellent approach for building motivating programming projects at first, and it has good results. My concerns, like those of other educators, are: Why do some, or even most, children not like programming? What kind of students are successful at learning programming? Which of them might have difficulties learning this discipline? How can we motivate students at this age to learn programming? Thankfully, that was not the end.

As Haskell implements call-by-need semantics, it is possible to define new conditional operations.
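For instance, here is a minimal sketch of a user-defined conditional (the name if' is illustrative); it only works because the branch that is not selected is never evaluated:

```haskell
-- A user-defined if: possible because Haskell is call-by-need,
-- so the branch that is not selected is never evaluated.
if' :: Bool -> a -> a -> a
if' True  thenBranch _          = thenBranch
if' False _          elseBranch = elseBranch
```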
In fact, this is quite helpful when writing domain-specific languages. A somewhat more useful control structure is the cond (for conditional) function that stems from the LISP and Scheme languages. It allows you to define a table-like decision structure, somewhat resembling a switch statement from C-style languages.
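A sketch of such a cond function and a typical use (the concrete code here is illustrative):

```haskell
-- cond walks a list of (condition, result) pairs and returns the
-- result paired with the first condition that evaluates to True.
cond :: [(Bool, a)] -> a
cond []                = error "cond: no condition was True"
cond ((True,  v) : _)  = v
cond ((False, _) : cs) = cond cs

classify :: Int -> String
classify n = cond
  [ (n < 0,     "negative")
  , (n == 0,    "zero")
  , (n < 10,    "small")
  , (otherwise, "large") ]
```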
Now we come to one of the most distinguishing features of Haskell: type classes. In the section on Polymorphic Data Types we have seen that type variables (or parameters) allow type declarations to be polymorphic, as in data Maybe a = Nothing | Just a. This approach is called parametric polymorphism and is used in several programming languages. Type classes, on the other hand, address ad hoc polymorphism of data types. This approach is also known as overloading. Suppose we would like to be able to use characters, represented by the data type Char, as if they were numbers. In Haskell, "being a number" is captured by the Num type class, whose declaration details what functions a type a has to implement to be used as an instance of Num.
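The Num class from the base library looks roughly like this (slightly simplified here):

```haskell
-- The Num type class (slightly simplified): any instance must
-- provide these arithmetic operations.
class Num a where
  (+), (-), (*) :: a -> a -> a
  negate        :: a -> a
  abs           :: a -> a
  signum        :: a -> a
  fromInteger   :: Integer -> a
```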
This is all we need to know to make the type Char an instance of the Num type class, so without further ado we dive into the implementation (please note that fromEnum converts a Char into an Int and toEnum converts an Int into a Char). The piece of code sketched below makes the type Char an instance of the Num type class.
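One way to write this instance is to route every operation through fromEnum and toEnum (a sketch for demonstration purposes, not production code):

```haskell
instance Num Char where
  a + b       = toEnum (fromEnum a + fromEnum b)
  a - b       = toEnum (fromEnum a - fromEnum b)
  a * b       = toEnum (fromEnum a * fromEnum b)
  negate a    = toEnum (negate (fromEnum a))
  abs a       = toEnum (abs (fromEnum a))
  signum a    = toEnum (signum (fromEnum a))
  fromInteger = toEnum . fromInteger
```

With this instance in scope, an expression like 'a' + 'b' type-checks and yields a Char.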
Originally, the idea for type classes came up in order to provide overloading of arithmetic operators, so that the same operators could be used across all numeric types. But the type class concept proved to be useful in a variety of other cases as well. This has led to a rich set of type classes provided by the Haskell base library and a wealth of programming techniques that make use of this powerful concept.
Some of the most important type classes in the Haskell base library include Eq, Ord, Show, Read, Num, Functor, Foldable, and Monad; we will meet most of them below. Now we can turn some of the data types that we defined in the section on Algebraic Data Types into instances of the Eq type class.
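Those data types are not reproduced here, so assume a simple Severity type as a stand-in; a hand-written Eq instance might look like this:

```haskell
data Severity = Low | Middle | High | Critical

-- A hand-written Eq instance: two severities are equal exactly
-- when they are built from the same constructor.
instance Eq Severity where
  Low      == Low      = True
  Middle   == Middle   = True
  High     == High     = True
  Critical == Critical = True
  _        == _        = False
```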
As you will have noticed, the code for implementing Eq is quite boring. Even a machine could do it! That's why the language designers have provided a deriving mechanism that lets the compiler implement type class instances automatically whenever they are derivable, as in the Eq case. With this syntax it is much easier to let a type implement the Eq type class, as the sketch below shows. This automatic deriving of type class instances works for many cases and eliminates a lot of repetitive code.
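A sketch, reusing the Severity stand-in from above together with a hypothetical Status type:

```haskell
data Status   = Idle | Busy
  deriving (Eq)

data Severity = Low | Middle | High | Critical
  deriving (Eq)
```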
For example, it's possible to automatically derive instances of the Ord type class, which provides ordering functionality, as sketched below. If you use deriving for the Status and Severity types, the compiler will implement the ordering according to the order of the constructors in the type declaration.
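A sketch, again with the stand-in types:

```haskell
data Status   = Idle | Busy
  deriving (Eq, Ord)

data Severity = Low | Middle | High | Critical
  deriving (Eq, Ord)

-- The derived ordering follows the constructor order, e.g.:
-- ghci> Low < Critical
-- True
-- ghci> Busy > Idle
-- True
```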
Two other quite useful type classes are Read and Show, which also support automatic deriving. Show provides a function show with the type signature show :: Show a => a -> String. This means that any type implementing Show can be converted, or marshalled, into a String representation.
Creating a Show instance can be achieved by adding a deriving Show clause to the type declaration. The Read type class is used for the opposite direction: unmarshalling data from a String with the function read, whose signature is read :: Read a => String -> a. This signature says that for any type a implementing the Read type class, the function read can reconstruct an instance of a from its String representation, as the sketch below illustrates.
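The PairStatusSeverity type referenced in the next paragraph is not reproduced here either; a plausible stand-in definition and a round trip through show and read might look like this:

```haskell
data Status   = Idle | Busy
  deriving (Show, Read, Eq)

data Severity = Low | Middle | High | Critical
  deriving (Show, Read, Eq)

data PairStatusSeverity = PSS Status Severity
  deriving (Show, Read, Eq)

-- read needs to know the expected result type, here via an annotation:
example :: PairStatusSeverity
example = read "PSS Busy High" :: PairStatusSeverity

-- For derived instances, read . show is the identity:
roundTripOk :: Bool
roundTripOk = read (show (PSS Idle Low)) == PSS Idle Low
```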
Please note that it is required to specify the expected target type with the :: PairStatusSeverity clause. Haskell uses static compile-time typing: at compile time there is no way to determine which type an expression like read "some string content" should return. Thus the expected type must be specified, either implicitly by some function's type signature or, as in the example above, by an explicit annotation.
Together, show and read provide a convenient way to serialize (marshal) and deserialize (unmarshal) Haskell data structures. This mechanism does not provide an optimized binary representation, but it is still good enough for many practical purposes: the format is more compact than JSON, and it does not require a parser library.

The most interesting type classes are those derived from abstract algebra or category theory.
Studying them is a very rewarding process that I highly recommend. However, it is definitely beyond the scope of this article. Thus, I'm only pointing to two resources covering this part of the Haskell type class hierarchy. The first one is the legendary Typeclassopedia by Brent Yorgey.
The second one is Lambda the Ultimate Pattern Factory, written by myself, which relates the algebraic type classes to software design patterns; therefore we will only cover some of these type classes here. In the section on declarative programming we came across two very useful concepts: mapping a function over all elements of a container, and folding a container into a single result value. These concepts are not only useful for lists but also for many other data structures, so it doesn't come as a surprise that there are type classes that abstract these concepts.
The Functor type class generalizes the functionality of applying a function to a value in a context without altering the context, e.g. mapping a function over all elements of a list or a tree. As already described above, fmap maintains the tree structure unchanged but converts the type of each Leaf element, which effectively changes the type of the tree. As writing Functor instances by hand is a boring task, it is again possible to use a deriving clause to let data types instantiate Functor. As already mentioned, Foldable provides the ability to perform folding operations on any data type instantiating the Foldable type class. The sketch below demonstrates both.
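A sketch with a stand-in Tree type (the original Tree definition from earlier in the article is not reproduced here):

```haskell
{-# LANGUAGE DeriveFunctor, DeriveFoldable #-}

data Tree a = Leaf a | Node (Tree a) (Tree a)
  deriving (Show, Functor, Foldable)

tree :: Tree Int
tree = Node (Leaf 1) (Node (Leaf 2) (Leaf 3))

-- fmap keeps the tree structure but converts each Leaf element:
labels :: Tree String
labels = fmap show tree   -- Node (Leaf "1") (Node (Leaf "2") (Leaf "3"))

-- Foldable gives us folds (and sum, length, toList, ...) for free:
total :: Int
total = sum tree          -- 6
```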
Because of the regular structure of algebraic data types, it is again possible to automatically derive Foldable instances by using the deriving clause, as the sketch above already showed.

Now we will take the data type Maybe as an example to dive deeper into the more complex parts of the Haskell type class system.
The Maybe type is quite simple: a value of type Maybe a is either a null value, called Nothing, or a value of type a wrapped by the constructor Just, as in its definition data Maybe a = Nothing | Just a. The Maybe type is helpful in situations where an operation may fail to return a valid result. Take for instance the function lookup from the Haskell base library, which looks up a key in a list of key-value pairs. If it finds the key, the associated value val is returned, but wrapped in a Maybe: Just val. If it doesn't find the key, Nothing is returned.
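Its signature and behavior:

```haskell
-- lookup :: Eq a => a -> [(a, b)] -> Maybe b   (from the Prelude)

pairs :: [(String, Int)]
pairs = [("one", 1), ("two", 2)]

found, missing :: Maybe Int
found   = lookup "two"   pairs   -- Just 2
missing = lookup "three" pairs   -- Nothing
```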
The Maybe type is a simple way to avoid null-pointer errors and similar issues with undefined results; thus, many languages have adopted it under different names. In Java, for instance, it is called Optional. In Haskell, it is considered good practice to use total functions, that is, functions that have defined return values for all possible input values, wherever possible, in order to avoid runtime errors.
Typical examples of partial (that is, not total) functions are division, which is undefined for a divisor of 0, and the square root, which is undefined for negative numbers. We can use Maybe to make them total, as the sketch below shows. In fact, there are alternative base libraries that don't provide any partial functions at all. Now let's consider a situation where we want to combine several of those functions: say we first want to look up the divisor in a key-value table, then perform a division with it, and finally compute the square root of the quotient.
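A sketch (the names safeDiv, safeRoot, and findDivRoot are illustrative):

```haskell
safeDiv :: Double -> Double -> Maybe Double
safeDiv _ 0 = Nothing            -- division by zero has no result
safeDiv x y = Just (x / y)

safeRoot :: Double -> Maybe Double
safeRoot x
  | x >= 0    = Just (sqrt x)
  | otherwise = Nothing          -- no real root for negative numbers

-- Combine the three steps, checking for Nothing at every stage:
findDivRoot :: Double -> String -> [(String, Double)] -> Maybe Double
findDivRoot x key table =
  case lookup key table of
    Nothing      -> Nothing
    Just divisor ->
      case safeDiv x divisor of
        Nothing       -> Nothing
        Just quotient -> safeRoot quotient
```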
The resulting control flow follows the two-track model described in the Railroad Oriented Programming presentation: in each single step we have to check for Nothing, and in that case we directly short-circuit to an overall Nothing result value.
In the Just case we proceed to the next processing step. This kind of handling is repetitive and buries the actual intention under a lot of boilerplate. And as Haskell uses layout, i.e. significant indentation, to delimit code blocks, the nested case expressions also push the code further and further to the right.
So we are looking for a way to improve the code by abstracting away the chaining of functions that return Maybe values and by providing a way to short-circuit the Nothing cases. We need an operator andThen that takes the Maybe result of a first function application as its first argument and, as its second argument, a function that will be used in the Just x case and that again returns a Maybe result. In case the input is Nothing, the operator directly returns Nothing without any further processing.
In case the input is Just x, the operator applies the argument function fun to x and returns its result; a sketch follows below. Side note: in Java the Optional type has a corresponding method, Optional.flatMap. This kind of chaining of functions in the context of a specific data type is quite common, so it doesn't come as a surprise that there exists an even more abstract version of andThen that works for arbitrary parameterized data types: the monadic bind operator (>>=).
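Here is what andThen and the rewritten findDivRoot might look like:

```haskell
andThen :: Maybe a -> (a -> Maybe b) -> Maybe b
andThen Nothing  _   = Nothing   -- short-circuit the failure case
andThen (Just x) fun = fun x     -- feed the value to the next step

-- findDivRoot, now free of nested case expressions:
findDivRoot :: Double -> String -> [(String, Double)] -> Maybe Double
findDivRoot x key table =
  lookup key table  `andThen` \divisor  ->
  safeDiv x divisor `andThen` \quotient ->
  safeRoot quotient
```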
When we compare the signature of this bind operator, (>>=) :: Monad m => m a -> (a -> m b) -> m b, with the type signature of the andThen operator, andThen :: Maybe a -> (a -> Maybe b) -> Maybe b, we can see that both operators bear the same structure: just substitute the type variable m with Maybe.
We can read this type signature as: take a value of type a in a context m, plus a function from a plain a to a value of type b in the context m, and produce a value of type b in the context m. Monads are a central element of the Haskell type class ecosystem, and Haskell even provides special syntactic sugar for working with them: the do notation. Using do notation, findDivRoot looks like the sketch below. It reads quite like a sequence of statements, including variable assignments, in an imperative language. Due to this similarity, monads have been aptly called "programmable semicolons". But as we have seen, below the syntactic sugar it's a purely functional composition!
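The do-notation version, relying on the Monad instance of Maybe:

```haskell
-- Each <- arrow desugars to an application of (>>=):
findDivRoot :: Double -> String -> [(String, Double)] -> Maybe Double
findDivRoot x key table = do
  divisor  <- lookup key table
  quotient <- safeDiv x divisor
  safeRoot quotient
```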
A function is called pure if it corresponds to a function in the mathematical sense: it associates each possible input value with an output value, and does nothing else. In particular, a pure function causes no observable side effects: it neither mutates state nor performs I/O. Purity makes it easy to reason about code, as it is so close to mathematical calculus; the properties of a Haskell program can thus often be determined with equational reasoning, and I have provided a worked example of equational reasoning in Haskell elsewhere. Purity also improves testability: it is much easier to set up tests without worrying about mocks or stubs to factor out access to backend layers. All the functions that we have seen so far are pure code that is free from side effects.
But real programs need to interact with the outside world, so how can side effects be handled at all? The Haskell language designers came up with a solution that distinguishes Haskell from most other languages: side effects are always explicitly declared in the function type signature.
In the next section we will learn how exactly this works. The most prominent Haskell monad is the IO monad, and we'll study it with a simple example. In an imperative language, reading a String from the console simply returns a String value. In Haskell, the corresponding function getLine has the type getLine :: IO String, which could be interpreted as: getLine returns a String in an IO context. So how can we use the result of getLine in a function that takes a String value as an input parameter?
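The answer is the same bind mechanism we used with Maybe, which the do notation sugars over; a minimal sketch:

```haskell
import Data.Char (toUpper)

-- Read a line, bind the String inside IO to `line`,
-- then use it like any ordinary String value:
main :: IO ()
main = do
  line <- getLine
  putStrLn (map toUpper line)
```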
Making side effects explicit in function type signatures is one of the most outstanding achievements of Haskell. This feature leads to a very rigid distinction between code that is free of side effects (aka pure code) and code that has side effects (aka impure code). Keeping domain logic pure, particularly when working only with total functions, will dramatically improve reliability and testability, as tests can be run without setting up mocks or stubbed backends.
It's not possible to introduce side effects without making them explicit in type signatures; there is nothing like Java's invisible RuntimeExceptions. So you can rely on the compiler to detect any violation of a rule like "no impure code in domain logic".
The sections on type classes, and on monads in particular, have been quite lengthy, yet they have hardly shown more than the tip of the iceberg. If you want to dive deeper into type classes, I recommend The Typeclassopedia.

Immutability is another safety net: in Haskell, data is immutable by default, so a value cannot be modified behind your back once it has been constructed. Therefore, most Haskell programs are immune to this whole class of mutation-related bugs.

What about performance? Python is a high-level interpreted language that emphasizes code readability, so it makes a useful comparison point. Consider a small benchmark task that is achieved with a convolution: we move our kernel along the signal and compute sliding dot products.
For each point of the input signal, we overlap the kernel with the signal and cut off at the edges, defaulting the signal to 0 when looking outside its range.
Is this fast or slow? The pure-Python version turns out to be very slow, which means that Python by itself is a poor fit for performance-sensitive code, unless most of the work is offloaded to C functions. Oftentimes, Haskell developers use linked lists to represent sequences of values, even though they may not be the best choice of data structure for the task at hand; a list-based version is sketched below. Even so, a naive and very simple Haskell implementation outperforms the naive Python code by a factor of 50!
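A naive list-based convolution might look like this (an illustrative reconstruction, not the article's original listing):

```haskell
import Data.List (tails)

-- Full convolution over plain lists: pad the signal with zeros,
-- slide a window of kernel length over it, and take dot products.
convolve :: [Double] -> [Double] -> [Double]
convolve kernel signal =
  [ sum (zipWith (*) rkernel window) | window <- windows ]
  where
    k       = length kernel
    rkernel = reverse kernel              -- convolution flips the kernel
    padding = replicate (k - 1) 0
    padded  = padding ++ signal ++ padding
    windows = [ w | t <- tails padded
                  , let w = take k t
                  , length w == k ]
```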
What would it take to get closer to the C implementation? Turns out, not much — we just have to use the right data structure: arrays with slicing provided by the vector package. A more detailed exploration of this example is available in an article by Maxim Koltsov.
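For illustration, an array-based version using the vector package might look like this (a sketch using indexing rather than slicing):

```haskell
import qualified Data.Vector.Unboxed as V

-- Full convolution over unboxed arrays: out[i] = sum_j k[j] * s[i-j].
convolveV :: V.Vector Double -> V.Vector Double -> V.Vector Double
convolveV kernel signal = V.generate outLen point
  where
    k       = V.length kernel
    n       = V.length signal
    outLen  = n + k - 1
    point i = sum [ kernel V.! j * signalAt (i - j) | j <- [0 .. k - 1] ]
    signalAt ix
      | ix < 0 || ix >= n = 0             -- default to 0 outside the range
      | otherwise         = signal V.! ix
```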
If you are in the dynamic typing camp, perhaps this article by Alexis King might convince you otherwise. In any case, the primary and unquestionable advantage of static typing is that it catches some of the bugs before the software is shipped to the users. Testing can only discover the presence of certain unwanted behaviors, whereas types can guarantee their absence.
Consider a classic JavaScript mistake: a variable v holds a number, and somewhere else the code guards an assignment to one of its fields behind a condition p, something like if (p) { v.x = 5; }. Since v is a number, it does not have any fields, and the assignment silently has no effect. The two lines may be far apart, so the issue can be hard to spot. And the condition p may be hard to trigger, so the tests may not catch this.
Alternatively, you could use TypeScript, which helpfully identifies the error before the browser even gets to run the code. In this regard, Haskell is like TypeScript: the compiler will analyze the code and identify as many issues as it can. Another advantage of static types is type inference: the compiler can tell you the essential information needed to use an API. Say you come across an unfamiliar function; how do you know what it takes as input and what it produces as output?
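GHCi's :t command answers exactly that question; a sketch of such a session, using traverse as a stand-in example:

```haskell
-- ghci> :t traverse
-- traverse
--   :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
--
-- Specialised to concrete types, such as [] and IO:
-- ghci> :t traverse :: (a -> IO b) -> [a] -> IO [b]
-- traverse :: (a -> IO b) -> [a] -> IO [b]
```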
GHCi will report this information, as the sketch above shows. Specialise the signature to concrete types, such as [] and IO, and you get a pretty decent description of how to use the function.

Pure functions are a joy to test and debug. They are deterministic: for equal inputs, they produce equal outputs. This empowers us to reason about parts of the system in isolation.
And it allows us to use property-based testing, one of the most powerful testing techniques. The idea is to apply pure functions to random inputs and verify their results. Due to the randomized nature of property-based tests, they tend to reveal unexpected corner cases.
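A minimal sketch using the QuickCheck library (the property shown is illustrative):

```haskell
import Test.QuickCheck

-- Property: reversing a list twice yields the original list.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseTwice
-- +++ OK, passed 100 tests.
```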
In Haskell, it is always valid to factor out subexpressions as a form of refactoring. Consider a program along the lines of the sketch below, where an expensive computation is bound to a name but only needed in one of two cases. Skipping the computation when its result is not needed can save us a lot of time. Of course, this is only possible because if-then-else is computed lazily: the then branch is evaluated only when the condition holds, and the else branch is only evaluated when the condition does not hold. This is also known as short-circuit evaluation. The remarkable property of Haskell is that user-defined functions also exhibit this behavior.
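A reconstruction of the idea, with a user-defined helper named whenEven (the name appears in the next paragraph; its exact original definition is not shown here):

```haskell
-- A user-defined conditional: returns the given value when n is even,
-- and a default of 0 otherwise.
whenEven :: Int -> Int -> Int
whenEven n value = if even n then value else 0

-- Factoring the expensive subexpression out into a local binding is
-- safe: thanks to laziness, `expensive` is only ever computed when
-- whenEven actually uses it, i.e. when n is even.
process :: Int -> Int
process n =
  let expensive = sum [1 .. 100000000]
  in  whenEven n expensive
```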
In an eager language, this simple refactoring would lead to a major performance degradation: the expensive computation would always be performed before the whenEven call, even though its result is not needed half the time.