Computer experience history Jim Groeneveld

 

Since about the early 1970s I have been using computers, initially a mainframe (a CDC Cyber running SCOPE, later NOS and NOS/VE) with, as far as I remember, a memory of 100,000 words (1 word = 60 bits, i.e. 10 characters of 6 bits each). It was driven by Hollerith punch cards (80 columns), punched on very noisy punch machines. On those we punched both our programs and our data, which we delivered at the appropriate desk. After an inevitable (coffee) break of at least 15 minutes, often more, we could fetch the output from a shelf, study the errors and rerun the whole process, this time only repunching the buggy cards. After several iterations we were usually satisfied enough with the result. I can still hear those punch machines in my mind.

 

Some time later we used TTYs (printing terminals, 110 bps) instead of punch cards, or even real electronic terminals (300 and 1200 bps) via direct or modem (telephone) lines. That way we could remain at our own workplace, quite a distance away, enter our data, compose our program, run the whole combination, preview the results on the screen, and have the output printed at the computer centre. We received the printed output (with line numbers) by regular mail the next morning to study and revise. Each day one such iteration took place, trying to get as much as possible out of the printout, with the changes written on it in red ink and applied with the scrolling, line-oriented editor. This really was luxury already.

 

At that time we used a statistical package written locally (at the State University of Groningen), called WESP, in Dutch "Waarlijk Eenvoudig Statistisch Pakket", in English "Really Simple Statistical Package". I don't remember much of it anymore, but variables had no names, just sequence numbers. Data editing was limited, and so was the range of statistics, though at the time it was quite worthwhile and offered almost everything we needed for our projects.

 

During the late 1970s we also started using so-called table calculation machines, actually programmable calculators the size of a typewriter, made by Diehl. They could be programmed in their own, very specific machine language. I remember the Combitronic, with 100 programming steps and 10 numerical registers, for which I programmed, among other things, the binomial distribution outcomes for given numbers of trials and proportions. A quite sophisticated one was the Alphatronic, having a memory of 1600 programming steps, equivalent to its 160 numerical registers, mostly divided into 800 steps with 80 registers. On it I programmed a 2-way analysis-of-variance program, also usable as 1-way with repeated measures. The allowed cell dimensions were 10 by infinite (really infinite, as running calculations were maintained) and the supported designs included both equal and unequal cell frequencies with several statistical variations (after Winer) and homogeneity-of-variance tests (Hartley and Cochran, I believe) as a bonus. I must still have the cassette with the software on it somewhere.
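The binomial outcomes mentioned above fit in a handful of lines today; this is merely a modern Python sketch of the same calculation (not the original Diehl machine code, which is not reproduced here), giving P(X = k) for a given number of trials n and success proportion p:

```python
from math import comb

def binomial_pmf(n: int, p: float) -> list[float]:
    """Probability of exactly k successes in n trials, for k = 0..n."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

# Example: 4 trials with success proportion 0.5
probs = binomial_pmf(4, 0.5)
# probs[2] is C(4,2) * 0.5^2 * 0.5^2 = 0.375; the list sums to 1
```

On the Combitronic the same arithmetic had to be squeezed into 100 programming steps and 10 registers, which is what made it a programming exercise at all.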

 

Gradually I became acquainted with SPSS-6000 (versions 6 up to 9) from the Vogelback Computing Center. That was quite an improvement over WESP, and the output was much fancier as well. To facilitate data entry I wrote my own complete database package allowing user-friendly input of numerical data, using SPSS variable names and labels. The package, DAPHNE (after our first child, and an abbreviation for Data Acquisition Programs Handling Numerical Entries), was written in Fortran IV for Cyber computers (yes, I am a *Real*Fortran*Programmer*, using features like unchecked subroutine calls outside allowed boundaries in order to peek inside memory).

 

Ever since computers could print text I have applied text processing in its most elementary form. At a certain stage we had access to a real text processing package on a mainframe. It, however, could not automatically hyphenate words. So, in the early 1980s, I wrote my own fully automatic (Dutch) hyphenation software (KAPAF) in Fortran, based on fuzzy logic: hyphenation probabilities assigned between the characters of each pair of letters. It functioned surprisingly well as a preprocessor to the text processing software, and was used for many years in producing our study reports.
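The pair-probability idea can be illustrated with a toy sketch in Python. The probability table, default value and threshold below are invented purely for illustration; the original KAPAF Fortran code and its Dutch letter-pair statistics are not reproduced here. A hyphen is proposed between two adjacent letters whenever the estimated break probability for that letter pair exceeds a threshold:

```python
# Toy illustration of pair-probability hyphenation (invented values,
# not the original KAPAF tables): look up, per pair of adjacent
# letters, the likelihood that a legal break falls between them.
BREAK_PROB = {
    ("n", "d"): 0.9,   # e.g. "han-del"
    ("r", "k"): 0.8,   # e.g. "wer-ken"
}
DEFAULT_PROB = 0.1     # unknown pairs rarely allow a break
THRESHOLD = 0.5

def hyphenate(word: str) -> str:
    """Insert a hyphen wherever the pair break probability is high enough."""
    out = [word[0]]
    for left, right in zip(word, word[1:]):
        if BREAK_PROB.get((left, right), DEFAULT_PROB) >= THRESHOLD:
            out.append("-")
        out.append(right)
    return "".join(out)

# With the toy table above: hyphenate("handel") -> "han-del"
```

The charm of the approach is that it needs no dictionary: a single table of letter-pair probabilities, estimated once from Dutch text, drives every hyphenation decision.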

 

SPSS changed its syntax drastically around the mid-1980s, with the new SPSS-X versions for mainframes (which also ran on a VAX/VMS that I used much later) and SPSS-PC for the IBM PC. Around 1991 I wrote an almost fully automatic syntax converter in BASIC (SPSS9TOX) for the IBM PC, to use with old ASCII data files that could not be reread and managed differently (i.e. converted to the new SPSS system file format). Since 1985 I have been using PCs or compatibles (running DOS) and from around 1990 these remained the only computers I used. Our graphical applications were either plots from SPSS or the very nice graphical screens and prints from Statgraphics (version 2.x). All the time I have been quite busy with data conversion (including the necessary serial communication) and wrote, among other things, the program DATAFIX (initially in BASIC, later rewritten in C), in particular to manage very long records (line lengths) in ASCII data files.

 

During the early days (the first ten years) of the IBM PC and its successors, a great deal of freeware statistical software, small and large, was developed around the world. At the end of the 1980s I had a good overview of it all and presented a comprehensive review. Later statistical software developments were fast and my time was limited, so I lost track.

 

During the early 1990s I was involved in a (medical informatics) project developing client-server software (written in C) with a very user-friendly GUI for the HP-UX platform, driving all kinds of Unix or DOS programs in the background. My job was to build the statistical client and server, i.e. a GUI interactively presenting all kinds of statistical techniques to be run from the server. The server part consisted of a computer language generator, adaptable to any statistical package with an appropriate configuration file, e.g. EpiInfo and SPSS. Currently (2014) I am gradually switching from Windows XP to Ubuntu Linux.

 

Only in 1997 did I start to learn and use SAS (version 6.12) for Windoesn't. I have the impression that SPSS is slower than SAS when performing similar tasks. This may be because SAS compiles its DATA and PROC steps before running them, while SPSS has always remained an interpreter (as far as I know). Since 1999 I have been active on the internet discussion list SAS-L as a SAS expert and was named Rookie of the Year 1999. In the meantime I passed my SAS Base, Advanced and Clinical Trials Certified Professional V9 exams.

 

Actually, I studied educational psychology (and graduated in it), but I specialized in social scientific research, its methodology, statistics and computer programming, and I have been employed in research and data processing all along.

 

Last update: 6 August 2014