Daniel Näslund's thoughts and writings

Why should children program - a review of Seymour Papert's Mindstorms

27 Aug 2016

I have done some programming exercises from code.org together with my six year old daughter. Why? I have mixed feelings about putting a child in front of a computer. On one hand, I’m worried about the attention addiction that I see in her peers: they spend way too much time in front of their tablets and TVs. Do I really want my daughter to start using the computer at this early age? I hear some parents argue that learning to use a computer is a valuable skill, but what does that mean? Understanding the machine? Understanding common UI idioms? Understanding how to access games?

I guess I’m hoping for her to get a head start; I want her to be able to build things on her own, not just consume what others have created. But what exactly is it that I want her to build? I see her and her younger brother spending a lot of time with their Lego bricks, creating houses, boats, cars and fantasy castles. What is it, beyond that, that I’m hoping she will learn by using a computer?

I’ve viewed computer programming as a rational, logical endeavour that requires precision and a lot of upfront planning. That doesn’t sound like an activity for a child. It sounds more like a sure way of killing someone’s imagination and turning them into the very machines they’re intended to program.

And when I watch her friends using the tablet in kindergarten, I see educational apps that introduce numbers, letters and simple logic. But couldn’t those subjects have been taught just as well without a computer? I see them stare at the screen, hypnotized, following the instructions of the program. When they play with their Lego bricks, everyone is physically active, participating by improving the design. But at the tablet, there’s only one passive user and a couple of even more passive observers. What are they learning in front of that screen? Who is in charge: they or the machine?

With those questions in mind, I started reading Mindstorms.

Papert says in the foreword that the book intends to use math and the Logo turtle as examples of how to create a learning environment that encourages self-learning. He mentions how living in France is more effective for learning French than taking French classes. In the same way, living in “Mathland” would be more effective than just learning math through the regular route. Through Logo and the turtle, the child can create his or her own micro world.

One thing that really resonated with me was when Papert discusses how the machine can help with heuristic thinking. I’ve looked upon computer interactions as consisting of two opposite perspectives: one is the deeply logical step-by-step interaction with the terminal, and the other is the whimsical point-and-click, where the user is easily distracted by whatever shows up in his web browser. He navigates by loose associations, following the very first impulse that arrives. In my mind I’ve labelled the terminal interaction as good: the user makes up a plan in his head and executes it on the machine. But Papert instead emphasizes the heuristic thinking part; we don’t know upfront which way will lead to the correct answer (or if there is a correct answer), we just want to somehow get a feel for the problem. The sort of user interfaces I’ve been preferring provide no slack in that regard. You’re either right or wrong. After reading Mindstorms I’m beginning to see how, even in mathematics, nothing is really set in stone.

The book describes how kids can learn math better by having an object-to-think-with, a connection between the abstract and the concrete. When I look at people using the computer, I see them adjusting themselves to the limitations of the computer and its peripherals; they’re touch typing instead of scribbling, following UI instructions instead of experimenting, searching instead of thinking. When I’ve watched my daughter using the Squeak-like environment of code.org, I’ve noticed that printing something on the screen does not have the same effect on her as creating something with her hands. I can see the usefulness of creating the micro worlds, but for a truly engaging environment there should be some interaction with the outside.

One central theme of the book is the notion that what one can learn is limited by the models one has available; learning the names of cities on a map is an easier thing to do than accepting that a paper can be used to represent the geography of the physical world. I wonder how many such models there are in my head? Papert says that children are small theory builders; they have lots of models in their heads for how the world fits together. He mentions that many children, when asked “who creates the wind?”, answer “the trees”. That absolutely makes sense: they can wave their arms and feel the wind, so why wouldn’t the wind be created by the trees? I asked my three year old son that question and sure enough, the answer was the trees. When I asked him where the seeds come from that turn into plants, he answered: “the earthworms put them there, they have a large supply and they transport them through small ducts”. It makes me wonder: what’s more important, to be correct or to be able to create many theories?

Papert asks the same question. Do we always have to be right or wrong? Aren’t we always partly right and partly wrong? A learning environment should encourage stepwise refinement: come up with an initial suggestion for a solution and iterate on that.

So what have I learned? I’ve widened my views about the pros and cons of computer interaction for children; things aren’t all black. And I’m beginning to think that the question should not be “how do we make a programmer out of a kid?”, but instead “how do we make the kid a maker?”. Yes, the graphical programming environments of Squeak and the like will help a child develop procedural thinking, and probably make the step to regular text-based programming easier, but is that the end goal? Learning that it’s ok to learn on your own and that it’s ok to draw your own conclusions looks like a better ambition. After reading Mindstorms, the lectures and labs on code.org look a bit too rigid and teacher-led.

What are the next steps for rr?

06 Mar 2016

Robert O’Callahan is leaving Mozilla to work on rr-related technology. I’ve been following the rr project at a distance, as a user and very casual contributor. I got curious: what will happen next? Here are my free-wheeling thoughts on possible directions.

The rr tool only runs on Linux; porting it to Mac would be hard but might be doable; porting to Windows would require close cooperation with Microsoft, but maybe it’s within reach as well. Being able to use it on programs that share memory with other non-recorded programs can’t be done today, but might become possible if some sort of kernel API for virtual memory write notifications were introduced into Linux. Adjusting the builtin scheduler to increase the chance of reproducing bugs is probably something that can give good returns. Running rr on ARM is not possible due to the semantics of its atomic instructions - perhaps other platforms might be doable: POWER8, MIPS?

All those questions above are interesting, but I’d like to focus on something else: what would be the optimal debugger interface? The debugging process for me boils down to: how can I find the causes of a problem, and what are the reasons that these causes exist? Reverse execution greatly speeds up the process of connecting a symptom to a cause. But it does not automatically provide more context into why the system has reached this state. It still demands that the programmer has a lot of knowledge about the program in their head to draw the correct conclusions.

To make my point: imagine using rr as an instruction-level debugger with no knowledge of symbols whatsoever. It would be possible to record the execution of a program, and later run it forwards or backwards. We could inspect the stack, the registers and the value of the instruction pointer. If we can keep all the state of our program in our head, then we can set hardware watchpoints on the appropriate addresses and find our root cause.
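
For a concrete feel of that flow, here is roughly what such a session looks like under rr (the program name and the watched variable are placeholders):

rr record ./myprog                # record an execution of myprog to a trace
rr replay                         # replay the latest trace under gdb
(gdb) watch -l corrupted_field    # hardware watchpoint on the field's address
(gdb) reverse-continue            # run backwards to the most recent write

The reverse-continue stops at the write that produced the bad value: the symptom has been connected to its cause.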

With symbols, it’s much easier to understand call stacks and keep track of long-lived state. I can explore the values of key data structures during the execution of the program, call functions to get deeper insights and modify variables to do experiments. But I’m still bound by gdb’s notion of a program as an instruction stream, a set of registers and a stack. In many cases, I’m more interested in the messages passed between two objects; a trace of important interactions. And I’m also bound by gdb’s “stop the world and peek” mode of operation. I want to see the interactions happen, not just look at them at a fixed point in time. It’s like the insights you can gain by watching visualizations of algorithms versus just inspecting the code.

For example: when I’m learning a new code base I often run Google’s pprof sampling profiler on an execution, to build a map of frequently called components. I then inspect those components by setting breakpoints when running the program again. I get a sense of the connections between components, but I don’t get any visual help in understanding the execution over time. How can rr provide a better way to understand the execution of a program over time? Can it help me build that mental map of the execution? Perhaps the debugger can infer some properties of the program from just running it: lifetimes of objects, major interactions between threads, clustering of calls shown as hot/cold call paths. If we store all events that have happened, then we can use as much time as we want (within reasonable limits, of course) to compute more detailed information about the execution trace.
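
To make the starting point concrete, here is roughly what that pprof workflow looks like with gperftools (the program name is a placeholder and the library path may differ on your system):

LD_PRELOAD=/usr/lib/libprofiler.so CPUPROFILE=prof.out ./myprog
google-pprof --text ./myprog prof.out    # ranked list of the hottest functions
google-pprof --gv ./myprog prof.out      # call graph with edge weights

The call graph gives me the map of components, but it is aggregated over the whole run - the time dimension is exactly what is missing.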

What if you could open the debugger, click on object A and object B, and ask that all function calls from A to B and from B to A be shown on a time axis? Or if you could mark instance variables a, b and c and say: whenever any of these values changes, I want to see the new state visualized on a time axis? What if we could do a bunch of tiny experiments like that?

So knowing where you are in the system is sort of the key part I miss from debuggers today. Timeline visualizations would be one way of addressing it. Some sort of map would be another. Perhaps a reachability analysis of a statement: what things are affected by this assignment? And seeing how different threads interact would be another very useful visualization to have. The challenge lies in making such a system as malleable as the gdb textual interface. How can we see more context but still preserve the precision of our queries?

In order to understand the code better, you might need more information than what is present in the debug symbols for your language. What to do then? Gdb provides Python bindings for formatting compound data types and printing frames. What other options are available for a debugger? Should the source code be annotated, as is done for RTOS tracing or for dtrace and systemtap probes? How can key data structures be visualized, as is done in DDD? What to do about JIT code?

Debugging, to me, consists of doing a bunch of experiments and evaluating the results. Could the debugger aid in that? Perhaps, for each experiment, annotations could be added to the trace? If the traces became portable, then one developer could send his trace, with annotations included, to another developer to show what he had tried.

Can rr somehow minimize the source code involved in triggering a bug, a.k.a. delta debugging? Can rr chunk a large trace into a smaller one; if we have a three hour recording where the bug is triggered after 2:55, can rr somehow create a trace file that has compressed all the previous events into something much smaller?

I guess what I’m talking about in this post is some sort of overlap between tracers, logging, profilers and debuggers. The first two provide continuous reporting while the latter two sample the execution. Breakpoints and watchpoints give me a chance to see details. Having a stored execution trace enables us to create a database from which we can conduct even more precise queries about the state of the program. That may be useful, but the extra information can also be used for creating fuzzier, context-rich visualizations that help us understand a complex system.

Ubuntu 15.10 on Dell XPS 15

05 Mar 2016

Buying a computer with the latest hardware can be a bit risky for a Linux user. The Dell XPS 15 9550 model is equipped with the latest Intel generation, Skylake; has an SSD that communicates over the new NVMe PCIe interface; and on top of that has a relatively new Nvidia graphics card. Luckily, I found a ubuntuforums.org thread with close to 300 posts that describes the installation pitfalls.

I installed Ubuntu 15.10 by following steps 1-7 from a post by jchedstrom. Then I installed the 4.4 Ubuntu kernel by following a post by eXorus. Some posts described problems when using the nvidia-355 driver together with the 4.4 kernel, recommending nvidia-352 instead. I have no idea what the difference between those is; I just followed the steps in this post by Ji_Balcar.

The above steps have given me a system that has good performance, reasonable battery life when used with the Intel graphics activated (more than five hours), can be used with external monitors, and can resume after being suspended. So far I’ve only connected one external monitor, and I’ve only sparingly been running with the Nvidia card activated since it consumes more power; roughly three hours of battery life.

Below follow some notes about tweaks I’ve made to the Ubuntu desktop environment. In the past I’ve used the tiling window manager XMonad, but I found that the hassle of keeping my configuration in sync with Ubuntu upgrades was not worth the small increase in productivity. So now I’m trying to make the minimal amount of adjustments, but some adjustments are still needed.

I removed the default directories created under /home/$USER, then edited ~/.config/user-dirs.dirs to use my own names. I disabled online search via System Settings => Security & Privacy => Search tab. I added the partner repositories via Software & Updates => Other Software tab, checking all boxes.
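
For reference, the edited ~/.config/user-dirs.dirs ends up looking something like this (the directory names below are just examples, not necessarily the ones I picked):

XDG_DOWNLOAD_DIR="$HOME/dl"
XDG_DOCUMENTS_DIR="$HOME/doc"
XDG_MUSIC_DIR="$HOME/music"
XDG_PICTURES_DIR="$HOME/pics"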

Ubuntu has virtual desktops disabled out of the box. I’ve enabled them by opening System Settings => Appearance => Behaviour and ticking the Enable Virtual Desktops checkbox. It only has four desktops out of the box, but I’ve increased that to the maximum six by running these commands:

gsettings set org.compiz.core:/org/compiz/profiles/unity/plugins/core/ vsize 1
gsettings set org.compiz.core:/org/compiz/profiles/unity/plugins/core/ hsize 6

The Dell XPS 15 9550 has a 4K monitor - the fonts and menus look tiny, very tiny. To fix that I opened Settings Manager => Displays and adjusted the “Scale for menu and title bars” to 2 instead of the default 1.
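
If I’ve understood the Unity settings correctly, the same thing can be done from the command line through the scale-factor key; the value is in eighths, so 16 means a 2x scale (the monitor name eDP1 is what xrandr reports on my machine, yours may differ):

gsettings set com.ubuntu.user-interface scale-factor "{'eDP1': 16}"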

I did a reset of the locale by editing /etc/default/locale (I always mix up the keyboard layout with the locale setting when running the Ubuntu installation program - I want the Swedish keyboard layout, but English menus and error messages):

LANG=en_US.UTF-8
LC_MESSAGES=POSIX

I’m a Vim user, and for easy switching between command and insert mode I’ve remapped the Esc key to Caps Lock and vice versa. I’ve also remapped one menu button to Ctrl. In earlier versions of Ubuntu I did that using the .Xmodmap file, but that one wasn’t read by the X server in Ubuntu 15.10. It looks like xmodmap has been superseded by the XKB system. I searched around but didn’t immediately figure out where to place the configuration directives, so I put them in .profile:

setxkbmap -option altwin:ctrl_win
setxkbmap -option caps:swapescape
setxkbmap -option terminate:ctrl_alt_bksp

Gnome seems to reset these options whenever settings change. Perhaps the approach described here will work better: http://askubuntu.com/questions/363346/how-to-permanently-switch-caps-lock-and-esc
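
Another option, which I believe survives both X restarts and Gnome settings changes, is to put the options in /etc/default/keyboard instead:

XKBOPTIONS="altwin:ctrl_win,caps:swapescape,terminate:ctrl_alt_bksp"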

I use gnome-terminal as my terminal emulator. It has built-in support for the Solarized color scheme; I’m using the light theme as my default.

I added myself to the dialout group in order to access the serial port:

sudo usermod -a -G dialout $USER
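
The group change takes effect after logging out and back in. After that, the serial port can be opened without sudo, for example with screen (the device name depends on what your adapter shows up as):

screen /dev/ttyUSB0 115200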

I installed sysstat and activated sar by editing /etc/default/sysstat:

ENABLED="true"
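
After restarting the sysstat service, collection starts and sar can be queried; for a quick live view of CPU usage:

sudo service sysstat restart
sar -u 1 3    # CPU utilization, three samples at one second intervals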

I disabled the notify-osd bubbles for available software updates by following this answer: https://askubuntu.com/questions/773874/disable-gnome-softwares-notification-bubble-notify-osd-for-available-updates
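
If I remember the answer right, it boils down to disabling automatic update downloads in GNOME Software:

gsettings set org.gnome.software download-updates false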