Irish Computer Science Leaving Certificate Curriculum Consultation Update

Last Tuesday I attended a consultation session for the Leaving Certificate Computer Science Curriculum. This is Ireland’s shot at putting CS on the pre-university curriculum, specifically the Irish Senior Cycle – which leads right up to where secondary school and university meet. I am particularly interested in this as I teach, research, and am pretty much obsessed with CS1 – the first programming course that CS majors take at university. I am also teaching this year on a new programme at my university, University College Dublin (with support from Microsoft), that is one of the first (if not the first) teacher training programmes specifically for this new curriculum.

The event was hosted by the National Council for Curriculum and Assessment, and was addressed by the Irish Minister for Education and Skills, Richard Bruton. It was an engaging and lively day of discussion and it was really good to see so many different stakeholders in attendance. I was in one of (I believe) six or more focus groups, and we had university professors, industry leaders (including Apple and Microsoft), current (and former) school teachers, and a member of the curriculum development team in the room (and I am missing a few people here).

There is another consultation event on September 16 at Maynooth University, hosted by the Computers in Education Society Ireland (CESI). The consultation officially closes on September 22, and a final draft of the curriculum is expected soon thereafter.

The enhancing compiler error messages saga: the saga continues

I was sorry to have missed Raymond Pettit, John Homer and Roger Gee presenting the latest installment of what is becoming the enhanced compiler error messages saga at SIGCSE earlier this month. Their paper “Do Enhanced Compiler Error Messages Help Students?: Results Inconclusive” [1] was two-pronged. It contrasted the work of Denny et al. [2] (which provided evidence that compiler error enhancement does not make a difference to students) and my SIGCSE 2016 paper [3] (which provided evidence that it does). It also provided fresh evidence that supports the work of Denny et al. I must say that I like Pettit et al.’s subtitle: “Results Inconclusive”. I don’t think that this is the final chapter in the saga. We need to do a lot more work on this.

Early studies on this often didn’t include much quantifiable data. More recent studies haven’t really been measuring the same things – and they have been measuring these things in different ways. In other words, the metrics and the methodologies differ. It’s great to see work like that of Pettit et al. that is more comparable to previous work like that of Denny et al.

One of the biggest differences between my editor, Decaf, and Pettit et al.’s tool, Athene, is that Decaf was used by students for all of their programming – practicing, working on assignments, programming for fun, even programming in despair. For most of my students it was the only compiler they used – so they made a lot of errors, and all of them were logged. Unlike Denny et al.’s students, my students did not receive skeleton code – they were writing programs, often from scratch. On the other hand, students often used Athene only after developing their code with their own local (un-monitored) compilers. Thus, many errors generated by the students in the Pettit et al. study were not captured. Often, the code submitted to Athene was already fairly refined. Pettit et al. even have evidence from some of their students that at times the code submitted to Athene only contained those errors that the students absolutely could not rectify without help.

As outlined in this post, Denny et al. and I were working towards the same goal but measuring different things. This may not be super apparent at first read, but under the hood comparing studies like these is often a little more complicated than it first looks. Of course these differences have big implications when trying to compare results. I’m afraid that the same is true comparing my work with Pettit et al. – we are trying to answer the same question, but measuring different things (in different ways) in order to do so.

Specifically, Pettit et al. measured:

  1. the number of non-compiling submissions; as did Denny et al., but unlike me
  2. the number of successive non-compiling submissions that produced the same error message; Denny et al. measured the number of consecutive non-compiling submissions regardless of why the submission didn’t compile, and I measured the number of consecutive errors generating the same error message, on the same line of the same file (see the sketch after this list)
  3. the number of submission attempts (in an effort to measure student progress)
  4. time between submissions; neither Denny et al. nor I measured time-based metrics
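To make the contrast in item 2 concrete, here is a minimal sketch (Java 16+; the CompileEvent type and its field names are hypothetical, and this is not Decaf’s actual logging or analysis code) of the repeated-error count I used: two consecutive compilations that generate the same CEM on the same line of the same file count as one repeat.

import java.util.List;

public class RepeatedErrorMetric {

    // One logged compilation outcome: which file, which line, which CEM (hypothetical type).
    record CompileEvent(String file, int line, String cem) {}

    // Counts consecutive events with the same CEM on the same line of the same file.
    static int countRepeatedErrors(List<CompileEvent> log) {
        int repeats = 0;
        for (int i = 1; i < log.size(); i++) {
            CompileEvent prev = log.get(i - 1);
            CompileEvent curr = log.get(i);
            if (curr.file().equals(prev.file())
                    && curr.line() == prev.line()
                    && curr.cem().equals(prev.cem())) {
                repeats++;
            }
        }
        return repeats;
    }

    public static void main(String[] args) {
        List<CompileEvent> log = List.of(
                new CompileEvent("Hello.java", 4, "';' expected"),
                new CompileEvent("Hello.java", 4, "';' expected"),        // same CEM, line, file: a repeat
                new CompileEvent("Hello.java", 9, "cannot find symbol")); // different error: not a repeat
        System.out.println(countRepeatedErrors(log)); // prints 1
    }
}

Denny et al.’s corresponding metric counts consecutive non-compiling submissions whether or not the message (or line) is the same, which is one reason the numbers from the two studies are hard to compare directly.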

I also did a fairly detailed comparison between my work and Denny et al. in [4] (page 10). In that study we directly compared some effects of enhanced and non-enhanced error messages:

In this study we directly distinguish between two sets of compiler error messages (CEMs), the 30 that are enhanced by Decaf and those that are not. We then explore if the control and intervention groups respond differently when they are presented with these. For CEMs enhanced by Decaf the control and intervention groups experience different output. The intervention group, using Decaf in enhanced mode, see the enhanced and raw javac CEMs. The control group, using Decaf in pass-through mode, only see the raw javac CEMs. Thus for CEMs not enhanced by Decaf, both groups see the same raw CEMs. This provides us with an important subgroup within the intervention group, namely when the intervention group experiences errors generating CEMs not enhanced by Decaf. We hypothesized that there would be no significant difference between the control and intervention groups when looking at these cases for which both groups receive the same raw CEMs. On the other hand, if enhancing CEMs has an effect on student behavior, we would see a significant difference between the two groups when looking at errors generating the 30 enhanced CEMs (due to the intervention group receiving enhanced CEMs and the control group receiving raw CEMs).
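To illustrate the design described in that excerpt, here is a minimal sketch (Java 16+; the ErrorEvent type, group labels, and sample CEMs are hypothetical, and this is not the analysis code from the paper) of how logged errors can be split into the four cells being compared: control versus intervention, crossed with whether the generated CEM is one of the 30 that Decaf enhances.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class GroupComparisonSketch {

    // One logged error: which group the student was in, and which CEM the error generated.
    record ErrorEvent(String group, String cem) {}

    // Buckets errors into group x (enhanced-eligible CEM or not). The intervention group saw
    // enhanced text for the "enhanced" CEMs; the control group saw only the raw javac text.
    static Map<String, Integer> bucketCounts(List<ErrorEvent> events, Set<String> enhancedCems) {
        Map<String, Integer> counts = new HashMap<>();
        for (ErrorEvent e : events) {
            String cell = e.group() + " / " + (enhancedCems.contains(e.cem()) ? "enhanced CEM" : "unenhanced CEM");
            counts.merge(cell, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // Stand-ins for the 30 CEMs enhanced by Decaf.
        Set<String> enhanced = Set.of("';' expected", "cannot find symbol");
        List<ErrorEvent> events = List.of(
                new ErrorEvent("intervention", "';' expected"),
                new ErrorEvent("control", "';' expected"),
                new ErrorEvent("intervention", "illegal start of expression"));
        System.out.println(bucketCounts(events, enhanced));
    }
}

Comparing the two groups on the unenhanced cells is the ‘no difference expected’ check; comparing them on the enhanced cells is where any effect of enhancement should appear.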

As mentioned, the metrics used by Pettit et al. and Denny et al. have more in common with each other than with mine. Pettit et al. and Denny et al. both used metrics based on submissions (that is, programs) submitted by students, or on the number of submission attempts. This certainly makes comparing their studies more straightforward. However it is possible that these metrics are too ‘far from the data’ to be significantly influenced by enhanced error messages. It is possible that metrics based simply on the programming errors committed by students, and on the error messages those errors generate, are more ‘basic’ and more sensitive.

Another consideration when measuring submissions is that just because a submission compiles does not mean that it is correct or does what was intended. It is possible that some students continue to edit (and possibly generate errors) after their first compiling version, or after they submit an assignment. These errors should also be analyzed. I think that in order to measure if enhancing error messages makes a difference to students we should focus on all programming activity. I’m afraid that otherwise, the results may say more about the tool (that enhances error messages) and the way that tool was used by students, than about the effects of the enhanced error messages themselves. I am sure that in some of my research this is also true – after all, my students were using a tool too, and this tool has its own workings which must generate effects of their own. Isolating the effects of the tool from the effects of the messages is challenging.

I am very glad to see more work in this area. I think it is important, and I don’t think it is even close to being settled. I have to say I really feel that the community is working together to do this. It’s great! In addition there may be more to do than determine if enhanced compiler errors make a difference to students. We have overwhelming evidence that syntax poses barriers to students. We have a good amount of evidence that students think that enhancing compiler error messages makes a positive difference. Some researchers think it should too. If enhancing compiler error messages doesn’t make a difference, we need to find out why, and we need to explain the contradiction this would pose. On the other hand, if enhancing compiler error messages does make a difference we need to figure out how to do it best, which would also be a significant challenge.

I hope to present some new evidence on this soon. I haven’t analyzed the data yet, and I don’t know which way this study is going to go. The idea for this study came from holding my previous results up to the light and looking at them from quite a different angle. I feel that one of the biggest weaknesses in my previous work was that the control and treatment groups were separated by a year – so that is what I eliminated. The new control and treatment groups were taking the same class, on the same day – separated only by lunch break. Fortuitously, due to a large intake, CP1 was split into two groups for the study semester but was taught by the same lecturer in exactly the same way – sometimes things just work out!

I will be at ITiCSE 2017 and SIGCSE 2018 (and 2019 for that matter – I am happy to be serving a two year term as workshop co-chair). I hope to attend some other conferences also but haven’t committed yet. I look forward to continuing the discussion on the saga of enhancing compiler error messages with anyone who cares to listen! In the meantime here are a few more posts where I discuss enhancing compiler error messages – comments are welcome…

[1] Raymond S. Pettit, John Homer, and Roger Gee. 2017. Do Enhanced Compiler Error Messages Help Students?: Results Inconclusive. In Proceedings of the 2017 ACM SIGCSE Technical Symposium on Computer Science Education (SIGCSE ’17). ACM, New York, NY, USA, 465-470. DOI: https://doi.org/10.1145/3017680.3017768

[2] Paul Denny, Andrew Luxton-Reilly, and Dave Carpenter. 2014. Enhancing syntax error messages appears ineffectual. In Proceedings of the 2014 conference on Innovation & technology in computer science education (ITiCSE ’14). ACM, New York, NY, USA, 273-278. DOI: http://dx.doi.org/10.1145/2591708.2591748

[3] Brett A. Becker. 2016. An Effective Approach to Enhancing Compiler Error Messages. In Proceedings of the 47th ACM Technical Symposium on Computing Science Education (SIGCSE ’16). ACM, New York, NY, USA, 126-131. DOI: https://doi.org/10.1145/2839509.2844584

Full text available at www.brettbecker.com/publications

[4] Brett A. Becker, Graham Glanville, Ricardo Iwashima, Claire McDonnell, Kyle Goslin, and Catherine Mooney. 2016. Effective Compiler Error Message Enhancement for Novice Programming Students. Computer Science Education 26(2-3), 148-175. DOI: http://dx.doi.org/10.1080/08993408.2016.1225464

Full text available at www.brettbecker.com/publications

You are what you measure: Enhancing compiler error messages effectively

Compiler Error Messages (CEMs) play a particularly essential role for programming students as they often have little experience to draw upon, leaving CEMs as their primary guidance on error correction. Further, they provide immediate feedback, with implications discussed in this post. In the absence of an instructor, the compiler and its messages are the only source of feedback on what the student is doing correctly and incorrectly. There is another issue at hand however – CEMs are frequently inadequate, present a barrier to progress, and are often a source of discouragement.

At SIGCSE 2016 I presented a paper which showed that enhancing compiler error messages can be effective, referred to here as Becker (2016). I also led a more in-depth study with a more focused comparison approach that was recently published in Computer Science Education (see my publications page for details on both). In 2014 Denny, Luxton-Reilly and Carpenter published a study providing evidence that enhancing CEMs was not effective, generating a bit of discussion on Mark Guzdial’s Blog. Although these papers came up with opposing conclusions, there are a ton of variables involved in studies like this, and two things in particular are really important. These might sound really obvious, but bear with me. These two things are:

  1. What is measured
  2. How these things are measured

Another important factor is the language used – as in the English terminology – not programming language. That will come up here soon enough.

In Becker (2016) I measured four things:

  1. number of errors per compiler error message
  2. number of errors per student
  3. number of errors per student per compiler error message
  4. number of repeated errors per compiler error message

Denny et al. measured three things:

  1. number of consecutive non-compiling submissions
  2. total number of non-compiling submissions
  3. number of attempts needed to resolve three errors: Cannot resolve identifier, type mismatch, missing semicolon

Getting back to my fairly obvious point that what is measured (and how) is of critical importance, let me dig into my four metrics for some of the not so obvious stuff. For starters, all four of my metrics involve student errors. Additionally, although I was measuring errors, for three of my metrics I was measuring some flavor of errors per CEM. This is important, and the wording is intentional. As I was investigating the effect of enhancing CEMs, the ‘per CEM’ part is by design. However it is also required for another reason – there is often not a one-to-one mapping of student committed errors to CEMs in Java – so I don’t know (from looking at the CEM) exactly what error caused that CEM. I could look at the source code to see, but the point is that from a CEM point of view, all I can know is how many times that CEM occurred – in other words, how many (student-committed) errors (of any type/kind/etc.) generated that CEM. See work by Altadmri & Brown (2015) and my MA thesis for more on this lack of a one-to-one mapping of errors to CEMs in Java. This makes things tricky. Finally, each metric warrants some discussion on its own:
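For example, here is a deliberately non-compiling snippet (a hypothetical class, just for illustration) in which two quite different student mistakes generate the same javac CEM, cannot find symbol, so the CEM alone does not identify the underlying error:

public class SymbolDemo {                        // this class intentionally does not compile
    public static void main(String[] args) {
        int total = 0;
        totl = total + 1;                        // misspelled variable name -> error: cannot find symbol
        Scanner in = new Scanner(System.in);     // missing import java.util.Scanner -> error: cannot find symbol
        System.out.println(in.nextInt() + total);
    }
}

Both mistakes are counted under the same CEM, even though the fixes (and arguably the underlying misconceptions) are quite different.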

  1. The number of errors per CEM was measured for all errors encountered during the study (generating 74 CEMs in total) and for errors generating the top 15 CEMs, representing 86.3% of all errors. Results indicated that enhancing CEMs reduced both.
  2. The number of errors per student was not significantly reduced when taking all 74 CEMs, but it was for errors generating the top 15 CEMs.
  3. The number of errors per student per CEM was significantly reduced for 9 of the top 15 CEMs (of which only 8 had enhanced CEMs). The odd-one-out was .class expected. Sometime I’ll write more on this – it’s a really interesting case.
  4. The number of repeated errors per CEM is dependent on the definition of a repeated error. I defined a repeated error similarly to Matt Jadud – two successive compilations that generate the same CEM on the same line of code. Also, this was for the top 15 CEMs.

If we now look at the metrics of Denny et al., the first two involve student submissions, which may have contained errors, but errors are not being measured directly (well, we know that the compiling submissions don’t have any errors, and that the non-compiling submissions do, but that’s about it). Only the third involves errors directly, and at that, only three particular types. What was really measured here was the average number of compiles that it takes a student to resolve each type of error, where a submission is said to have a syntax error of a particular type when the error is first reported in response to compilation, and the error is said to have been resolved when the syntax error is no longer reported to students in the feedback for that submission.
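A minimal sketch (Java 16+; the Submission type is hypothetical, and this is not the code used by Denny et al.) of that resolution metric might look like this: an error type becomes ‘active’ at the submission where it is first reported, and counts as resolved at the first later submission where it is no longer reported.

import java.util.List;

public class ResolveMetricSketch {

    // One submission, reduced to the set of error types reported for it (hypothetical type).
    record Submission(List<String> reportedErrorTypes) {}

    // Returns the number of further submissions it took to resolve errorType,
    // or -1 if it was never reported or never resolved.
    static int attemptsToResolve(List<Submission> submissions, String errorType) {
        int firstReported = -1;
        for (int i = 0; i < submissions.size(); i++) {
            boolean present = submissions.get(i).reportedErrorTypes().contains(errorType);
            if (firstReported < 0 && present) {
                firstReported = i;            // error type first appears here
            } else if (firstReported >= 0 && !present) {
                return i - firstReported;     // resolved: no longer reported
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        List<Submission> subs = List.of(
                new Submission(List.of("missing semicolon")),
                new Submission(List.of("missing semicolon")),  // still unresolved after one attempt
                new Submission(List.of()));                    // no longer reported
        System.out.println(attemptsToResolve(subs, "missing semicolon")); // prints 2
    }
}

Nothing in this metric looks inside a submission at individual errors, which is one reason its results are hard to compare directly with error-level metrics.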

So, comparing the results of these two studies, if this post were trying to reach a conclusion of its own, the best we can do is to compare the following result from Denny et al.:

  • D1. Enhancing compiler error messages does not reduce the number of attempts needed to resolve three errors (really, CEMs): Cannot resolve identifier, type mismatch, missing semicolon.

and the following from Becker (2016):

  • B1. Enhancing compiler error messages does reduce the number of errors that generate the CEMs: expected, incompatible types, ; expected, and many other CEMs.
  • B2. Enhancing compiler error messages does reduce the number of errors per student that generate the CEMs: expected, incompatible types, and many other CEMs*
  • B3. Enhancing compiler error messages does reduce the number of repeated errors generating the CEMs: expected, incompatible types, and many other CEMs.*

These are the only four results (across both papers) that measure the same thing – student errors. Further, we can only specifically compare the results involving the three CEMs that Denny et al. investigated. Becker (2016) investigated 74, including these three.

* The number of errors (per student, and repeated) generating the CEM ; expected was not reduced in these cases.

So, despite the differing general conclusions (Denny et al. indicate that enhanced CEMs are not effective, while Becker (2016) indicates that enhanced CEMs can be effective) if we synthesize the most common results from each paper, we end up with what the two studies agree on (sometimes), which is ; expected:

  • D1. Enhancing compiler error messages does not reduce the number of attempts needed to resolve missing semicolon (Denny et al.).
  • B2. Enhancing compiler error messages does not reduce the number of errors per student that generate the CEM ; expected (Becker 2016).
  • B3. Enhancing compiler error messages does not reduce the number of repeated errors per student that generate the CEM ; expected (Becker 2016).

I find this to be particularly unsurprising as ; expected is one of the most common CEMs (in my study the third most common, representing ~10% of all errors) and the actual CEM itself is one of the most straightforward of all Java CEMs. However, Becker (2016) had one result (B1) which showed that the number of errors generating ; expected CEMs was reduced. So for this CEM, maybe the jury is still out.

It may seem that the two studies didn’t agree on much, which technically is true. However I hope that any readers who have persevered this long can appreciate the nuances of what is measured (and how) in these types of study, particularly when comparing studies. It is very challenging because the nuances really matter. Further, they can really complicate the language used. If you try to make the language easy, you miss important details and become ambiguous. Incorporating those details into the language affects readability.

Finally, I think that this post demonstrates the need for studies that attempt to repeat the results of others, particularly in an area where results are contested. Comparing two different studies poses several other problems (apart from what is measured and how), and I won’t go into them here as most are well known and well discussed, but I do think that the difficulties that arise from the use of different language are often overlooked.

Either way, I believe that the results in Becker (2016), and the recent Computer Science Education article, are robust. These studies provide many results that do indicate that enhanced CEMs can be effective.

Learning from instantaneous feedback designed for experts, provided by a machine

I remember reading Matt Jadud’s thesis and being struck by a paragraph on punched cards, repeated here with kind permission:

In 1967, the Univac 1107 system running at Case University (later to become Case Western Reserve University) in Cleveland, Ohio, boasted an impressive turnaround-time of 12 hours on most jobs. Lynch reports that “A user can submit a deck at 8:00 a.m., have it on the input tape by noon, his job completed by 12:35, and the output returned to him by 5:00 p.m. in the evening.”[Lyn67] Compared to students today (who work in laboratories equipped with many machines, each thousands of times more powerful than the Univac 1107), the 9-hour turnaround time seems to approach infinity; but Lynch goes on to say that “It should be noted that only 10-15 percent of the runs are the first of the day, 85-90 percent are repeats, and about a third of the runs have a circulation time of less than 5 minutes. It is often possible to debug a moderate size program in less than an hour by gaining frequent access to the computer.”

This early report on programmer behaviour seems to imply that having access to rapid feedback from a computer regarding the syntax and semantics of a program was a valued interaction style. Not only is this behaviour exhibited by students today, but tools like Eclipse continuously re-compile the programmer’s code, highlighting errors as the programmer develops their code in real-time; this is the “rapid compilation cycle” taken to its natural limit.

It wasn’t that the above contains anything I didn’t pretty much know, but I had never thought about it in quite that way. Imagine having to wait hours or days to find out if a program compiled successfully! Perhaps some readers can remember those days. Anyway, this got me to thinking: what if compilers didn’t return error messages (or an indication that the compilation was successful) immediately? How would programming, and teaching programming, be different? This quickly led to thinking about how learning, and teaching, programming is different to other disciplines – and what that means.

In 1986, Perkins et al. noted that under normal instructional circumstances some students learn programming much better than others. Investigations of novice programmer behavior suggest that this happens in part because different students bring different patterns of learning to the programming context. Students often fall into varying combinations of disengaging from the task whenever trouble occurs, neglecting to track closely what their programs do by reading back the code as they write it, trying to repair buggy programs by haphazardly tinkering with the code, or having difficulty breaking problems down into parts suitable for separate chunks of code. The authors categorized programming students into groups: stoppers, who quickly gave up when faced with errors; movers, who would work their way through, around or away from errors; and tinkerers, who poke, tweak, and otherwise manipulate their code in a variety of small ways, sometimes making progress towards a working program, sometimes not. In any case, the fact that the compiler provides instant feedback is what allows these behaviors to exist.

Today’s students must write code differently than their counterparts decades ago due to this instant feedback. The behaviors described by Perkins et al. wouldn’t be possible (or would be very different) if students had to wait hours or days for compiler feedback. I have witnessed students use the compiler to tell them where their errors are. I have done the same myself. Why bother poring over your code with spectacularly aching scrutiny when the compiler will instantly, and with little concrete penalty, tell you what you did wrong (syntactically)? All you have to do is click this little button! Note that when I say instantaneous feedback, I am not addressing Eclipse-like red squiggly line behavior. That is of course ‘more instantaneous’ than having to click a button. But traditional push-the-button feedback is more or less instantaneous compared to the way it used to happen. As for the red squiggly lines, watch this space, there will be something to come there soon. Also, in this post I am not addressing the point that most compiler feedback is poor. See this post for an example.

Is there something lost in the present scenario? Was there value to thinking like a compiler, going over your code to ensure that when the real compiler does, there are no errors? Is there something that students are not learning, not experiencing, or not taking away from their programming practice in not doing so?

I will leave that question where it is, lest we disappear down the rabbit hole. It is however worthy of future thought.

Before this post ends however, what about the feedback that compilers provide? Feedback is of course regarded as one of the most important influences on learning and motivation, noted by Watson et al. in the context of programming. Unique to computer programming is the fact that much of the feedback that programming students receive comes:

  • from a machine
  • instantly
  • designed for experts, and often in terms that novices cannot fully understand

Edit Jan 30 2017: Also see this post for a discussion and an example of what can be done about this.

Clearly, programming students are in a different world when they are in CS1, compared to students in Phys1, Med1, or many (all?) other 101/introductory courses where feedback comes:

  • from a human
  • often delayed
  • designed for them, and (hopefully) in terms that they can fully understand

What does this mean for those that are learning to program? They get instant, difficult to understand feedback from a machine! For one thing, as they are in a different world to so many other students, their expectations, their outcomes, and the way that they are taught should probably be different too.

I’ll avoid the second rabbit hole by leaving that thought there… for now.

Becker, B.A. (2016). An Effective Approach to Enhancing Compiler Error Messages. In Proceedings of the 47th ACM Technical Symposium on Computing Science Education (SIGCSE 2016), pp. 126-131. ACM

Jadud, M. C. (2006). An exploration of novice compilation behaviour in BlueJ. PhD Thesis. University of Kent

Perkins, D. N., Hancock, C., Hobbs, R., Martin, F., & Simmons, R. (1986). Conditions of learning in novice programmers. Journal of Educational Computing Research, 2(1), 37-55

Traver, V. J. (2010). On compiler error messages: what they say and what they mean. Advances in Human-Computer Interaction, 2010

Watson, C., Li, F. W., & Godwin, J. L. (2012). BlueFix: using crowd-sourced feedback to support programming students in error diagnosis and repair. In International Conference on Web-Based Learning (pp. 228-239). Springer Berlin Heidelberg

Misleading, cascading Java error messages

I have been working with enhancing Java error messages for a while now, and I have stared at a lot of them. Today I came across one that I don’t think I’ve consciously seen before, and it’s quite a doozy if you are a novice programmer. Below is the code, with a missing bracket on line 2:

public class Hello {
       public static void main(String[] args)  //missing {
              double i;
              i = 1.0;
              System.out.println(i);
       }
}

The standard Java output in this case is:

C:\Users\bbecker\Desktop\Junk\Hello.java:2: error: ';' expected
       public static void main(String[] args)
                                             ^

C:\Users\bbecker\Desktop\Junk\Hello.java:4: error: <identifier> expected
              i = 1.0;
               ^

C:\Users\bbecker\Desktop\Junk\Hello.java:5: error: <identifier> expected
              System.out.println(i);
                                ^

C:\Users\bbecker\Desktop\Junk\Hello.java:5: error: <identifier> expected
              System.out.println(i);
                                  ^

C:\Users\bbecker\Desktop\Junk\Hello.java:7: error: class, interface, or enum expected
}
^

5 errors

Process Terminated ... there were problems.

Amazing. This is telling the student that there were 5 errors (not one), and none of the five reported errors are even close to telling the student that there is a missing bracket on line 2. If the missing bracket is supplied, all five “errors” are resolved.

During my MA in Higher Education I developed an editor that enhances some Java error messages, and I have recently published some of this work at SIGCSE (see brettbecker.com/publications). I hope to do some more work on this front soon, and in addition I would like to look more deeply at what effects cascading error messages have on novices. I can imagine that if I had no programming experience, was learning Java, and came across the above I would probably be pretty discouraged.

The enhanced error that my editor would provide for the above code, which would be reported side-by-side with the above Java error output is:

Looks like a problem on line number 2.

Class Hello has 1 fewer opening brackets '{' than closing brackets '}'.
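For the curious, a message like that can come from a very simple check. The sketch below is not Decaf’s actual implementation, and it is deliberately naive (a real tool would need to ignore brackets inside string literals, character literals, and comments); it just counts opening and closing brackets in the source and reports any imbalance:

public class BraceBalanceSketch {

    // Counts '{' and '}' in the source and builds a message describing any imbalance.
    static String checkBraces(String className, String source) {
        int open = 0, close = 0;
        for (char c : source.toCharArray()) {
            if (c == '{') open++;
            else if (c == '}') close++;
        }
        if (open == close) {
            return "Brackets appear balanced.";
        }
        int diff = Math.abs(open - close);
        String fewer = open < close ? "opening brackets '{'" : "closing brackets '}'";
        String more = open < close ? "closing brackets '}'" : "opening brackets '{'";
        return "Class " + className + " has " + diff + " fewer " + fewer + " than " + more + ".";
    }

    public static void main(String[] args) {
        // The Hello class from above, with the opening bracket of main missing.
        String source = "public class Hello {\n"
                + "   public static void main(String[] args)\n"
                + "      double i;\n      i = 1.0;\n      System.out.println(i);\n   }\n}\n";
        System.out.println(checkBraces("Hello", source));
        // prints: Class Hello has 1 fewer opening brackets '{' than closing brackets '}'.
    }
}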

A killer novice bug

Recently I had ‘one of those moments’ in my CP1 lab. I am a little embarrassed to say that I spent the last few minutes of this session staring at a C function written by a student that wasn’t behaving the way it should. The students were asked to write a function that returns the max value stored in a stack. Below is a reconstruction of the student’s code, slightly simplified:

int max(Stack *sptr) {
    int max = INT_MIN;
    Node popped;
    while (!isEmpty(sptr)){
         popped = pop(sptr);
         if(popped->data > max);{
             max = popped->data;
         }
    }
    return max;
}

The symptom of the bug was that the value of max returned was always the value of the last item popped from the stack.

A few labs later I had a similar situation. This time I spotted the bug right away (and embarrassingly felt slightly proud).

I am going to hope that at this point there are two types of people reading this:

  1. those that see the bug immediately
  2. those that don’t

For selfish reasons I hope that there are at least a few 2’s. At the time I had just written my own solution to this question, and had helped a dozen other students with their versions of their solutions. Needless to say, running up to the 90-minute mark I was suffering from code blindness. At least that’s my excuse for not seeing the error immediately.

So, for all of you 2’s out there, if there are any, the bug is the empty statement after the if condition. I have to say I sensed a mixture of relief and disappointment from the student when I pointed out the bug and her output was now as expected. The relief was coming from obvious places, but the disappointment seemed to stem from a sense of ‘Really? That’s it? You have got to be kidding me.’ I have to say I did feel for her.

A few days later in lecture I wrote the above code on the board (in China we are still using good-old-fashioned chalk and chalkboards), and asked the 100+ students to point out the problem. It took a good two minutes for the first student to do so.

gcc (4.9.3, no options) gives no warning for empty statements. I just checked Java SE 8 update 92, which gives an ‘empty statement after if’ warning, which, despite my unhappiness with most compiler error messages, is quite nice.
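Here, reproduced in Java, is a minimal, self-contained sketch (with hypothetical values standing in for the stack) of why the stray semicolon produces exactly the symptom described above: the semicolon is the entire (empty) body of the if, and the braces that follow form an ordinary block that always runs.

public class EmptyStatementDemo {
    public static void main(String[] args) {
        int max = Integer.MIN_VALUE;
        int[] popped = {7, 3, 5};   // stand-ins for the values popped from the stack
        for (int value : popped) {
            if (value > max);       // the semicolon is an empty statement; the if controls nothing
            {
                max = value;        // this block runs unconditionally
            }
        }
        System.out.println(max);    // prints 5 (the last value popped), not 7 (the max)
    }
}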

I think next year I’ll spend a little time discussing the empty statement and NOPs, as early as possible, and see if that reduces the troubles experienced this year.

A great resource for non-native English speakers studying computing

I have been teaching this semester in Beijing. The language of instruction is English but most of my students are not fluent – improving English is part of the program here. Two of my modules are CS1 and Computer Organization. Early on in this semester in both courses I encouraged students to look up a few terms in the Free Online Dictionary of Computing (foldoc.org). Little did I know then that I would end up referring to FOLDOC almost every week.

Started in 1985 by Denis Howe, FOLDOC is an online, searchable, encyclopedic dictionary, currently containing nearly 15,000 definitions. It also includes cross-references and pointers to related resources elsewhere on the Internet, as well as bibliographical references to paper publications.

What I really like about FOLDOC is its simplicity, and that the definitions are pointedly context-based, specifically describing what words mean in the context of computing. I never really thought about it until recently, but in computing we use many words in ways that can be quite far from their ‘normal’ meanings. Take for instance the word load. Computing people happily abuse this word using it often and with several meanings. The Merriam Webster Dictionary has these ‘simple definitions’ for load:

1. something that is lifted and carried

2. an amount that can be carried at one time : an amount that fills something (such as a truck)

3. the weight that is carried or supported by something

None of the other ‘full definitions’ mention anything like those that FOLDOC gives:

load

1. To copy data (often program code to be run) into memory, possibly parsing it somehow in the process. E.g. “WordPerfect can’t load this RTF file – are you sure it didn’t get corrupted in the download?” Opposite of save.

2. The degree to which a computer, network, or other resource is used, sometimes expressed as a percentage of the maximum available. E.g. “What kind of CPU load does that program give?”, “The network’s constantly running at 100% load”. Sometimes used, by extension, to mean “to increase the level of use of a resource”. E.g. “Loading a spreadsheet really loads the CPU”. See also: load balancing.

3. To install a piece of software onto a system. E.g. “The computer guy is gonna come load Excel on my laptop for me”. This usage is widely considered to be incorrect.

FOLDOC is pretty comprehensive too. Writing this post I hit ‘random’ on the site, and it brought me to the definition of CACM:

Communications of the ACM

(publication) A monthly publication by the Association for Computing Machinery sent to all members. CACM is an influential publication that keeps computer science professionals up to date on developments. Each issue includes articles, case studies, practitioner oriented pieces, regular columns, commentary, departments, the ACM Forum, technical correspondence and advertisements.

http://acm.org/cacm/.

Then I googled CACM. The CACM we know and love is the 5th hit, and unless you know what ACM stands for, the first page of results isn’t much help if you are looking to find what CACM means or stands for (in a computing context). I wish that someone had given me such a brief synopsis of CACM when I was starting out.

Other good entries for ‘normal’ English words whose computing definitions are not easily found on the net are iteration and volatile:

iteration

(programming)   Repetition of a sequence of instructions. A fundamental part of many algorithms. Iteration is characterised by a set of initial conditions, an iterative step and a termination condition.

A well known example of iteration in mathematics is Newton-Raphson iteration. Iteration in programs is expressed using a loop, e.g. in C:

	new_x = n/2;
	do
	{
	  x = new_x;
	  new_x = 0.5 * (x + n/x);
	} while (abs(new_x-x) > epsilon);

Iteration can be expressed in functional languages using recursion:

	solve x n = if abs(new_x-x) > epsilon
		    then solve new_x n
		    else new_x
		    where new_x = 0.5 * (x + n/x)
        solve n/2 n

volatile

1.   (programming)   volatile variable.

2.   (storage)   See non-volatile storage.

A few more clicks on random brought me to this, proof that those behind FOLDOC also have a great sense of humor:

elephant

Large, grey, four-legged mammal.

 

Update August 3 2016 – Merriam Webster have a learner’s dictionary which could be a valuable resource for those learning English.