WANTED: Volunteer to Scan Old Programs - CP/M


Thread: WANTED: Volunteer to Scan Old Programs

  1. WANTED: Volunteer to Scan Old Programs

    Is anybody out there who would volunteer to scan A LOT (208...) of old
    BASIC programs that were once a national (meaning: American) official
    test suite?

    In the next message, I will publish what I have retyped (the
    documentation).

    However, there is another "volume" (one is, right now, for sale in
    England: "NBS Minimal BASIC Test Programs: Version 2, user's manual"),
    and this beastie is about 440 pages big, containing the source code of
    208 BASIC programs that were used to test the conformance of any
    "Minimal BASIC" offered by a private company to be bought by an
    official US administration.

    I have been writing programs for years (25+ now), and I know all too
    well that 99% of them suffer from a lack of proofing.

    Now, the problem is: How do you prove that a big program (like the 12K
    interpreter that I am currently working on) is working properly,
    without any hidden bug?

    This is precisely what those "NBS Minimal BASIC Test Programs" are
    doing.

    I originally found Volume 2 by luck, and recently managed to find a
    photocopy of Volume 1.

    However, I am so busy that I feel overwhelmed by the number of pages to
    retype...

    (However, if, as usual, nobody else does anything, I will do it, one by
    one.)

    Read the next message for an explanation of this "test suite".

    (Since this is a rarity, I think that this file will be quickly saved
    by some...)

    Yours Sincerely,
    Mr Emmanuel Roche


  2. Re: WANTED: Volunteer to Scan Old Programs

    NBS.WS4
    -------

    Computer Science and Technology -- NBS Special Publication 500-70/1

    - "NBS Minimal BASIC Test Programs -- Version 2, User's Manual"
    "Volume 1 -- Documentation"

    (Retyped by Emmanuel ROCHE.)

    John V. Cugini
    Joan S. Bowden
    Mark W. Skall

    Center for Programming Science and Technology
    Institute for Computer Sciences and Technology
    National Bureau of Standards
    Washington, DC 20234

    Issued November 1980

    79 pages Price: $4.00


    Abstract
    --------

    This publication describes the set of programs developed by NBS for the
    purpose of testing conformance of implementations of the computer language
    BASIC to the American National Standard for Minimal BASIC, ANSI
    X3.60-1978. The Department of Commerce has adopted this ANSI standard as
    Federal Information Processing Standard 68. By submitting the programs to
    a candidate implementation, the user can test the various features which
    an implementation must support in order to conform to the standard. While
    some programs can determine whether or not a given feature is correctly
    implemented, others produce output which the user must then interpret to
    some degree. This manual describes how the programs should be used, so as
    to interpret correctly the results of the tests. Such interpretation
    depends strongly on a solid understanding of the conformance rules laid
    down in the standard, and there is a brief discussion of these rules and
    how they relate to the test programs, and to the various ways in which the
    language may be implemented.

    Key words: BASIC; language processor testing; Minimal BASIC; programming
    language standards; software standards; software testing


    Acknowledgments
    ---------------

    Version 2 owes its existence to the efforts and example of many people.
    Dr. David Gilsinn and Mr. Charles Sheppard, the authors of version 1
    (issued as an NBS Internal Report; no longer available), deserve credit
    for construction of that first system, of which version 2 is a refinement.
    In addition, they were generous in their advice on many of the pitfalls to
    avoid on the second iteration. Mr. Landon Dyer assisted with the testing
    and document preparation. It is also important to thank the many people
    who sent in comments and suggestions on Version 1. We hope that all the
    users of the resulting Version 2 will help us improve it further.


    Table of Contents
    -----------------

    1 How to use this manual

    2 The language standard for BASIC
    2.1 History and prospects
    2.2 The Minimal BASIC language
    2.3 Conformance to the standard
    2.3.1 Program conformance
    2.3.2 Implementation conformance

    3 Determining implementation conformance
    3.1 Test programs as test data, not algorithms
    3.2 Special issues raised by the standard requirements
    3.2.1 Implementation-defined features
    3.2.2 Error and exception reporting

    4 Structure of the test system
    4.1 Testing features before using them
    4.2 Hierarchical organization of the tests
    4.3 Environment assumptions
    4.4 Operating and interpreting the tests
    4.4.1 User checking vs. self-checking
    4.4.2 Types of tests
    4.4.2.1 Standard tests
    4.4.2.2 Exception tests
    4.4.2.3 Error tests
    4.4.2.4 Informative tests
    4.4.3 Documentation

    5 Functional groups of test programs
    5.1 Simple PRINTing of string constants
    5.2 END and STOP
    5.3 PRINTing and simple assignment (LET)
    5.3.1 String variables and TAB
    5.3.2 Numeric constants and variables
    5.4 Control statements and REM
    5.5 Variables
    5.6 Numeric constants, variables, and operations
    5.6.1 Standard capabilities
    5.6.2 Exceptions
    5.6.3 Errors
    5.6.4 Accuracy tests -- informative
    5.7 FOR-NEXT
    5.8 Arrays
    5.8.1 Standard capabilities
    5.8.2 Exceptions
    5.8.3 Errors
    5.9 Control statements
    5.9.1 GOSUB and RETURN
    5.9.2 ON-GOTO
    5.10 READ, DATA, and RESTORE
    5.10.1 Standard capabilities
    5.10.2 Exceptions
    5.10.3 Errors
    5.11 INPUT
    5.11.1 Standard capabilities
    5.11.2 Exceptions
    5.11.3 Errors
    5.12 Implementation-supplied functions
    5.12.1 Precise functions: ABS, INT, SGN
    5.12.2 Approximated functions: SQR, ATN, COS, EXP, LOG, SIN, TAN
    5.12.3 RND and RANDOMIZE
    5.12.3.1 Standard capabilities
    5.12.3.2 Informative tests
    5.12.4 Errors
    5.13 User-defined functions
    5.13.1 Standard capabilities
    5.13.2 Errors
    5.14 Numeric expressions
    5.14.1 Standard capabilities in context of LET-statement
    5.14.2 Expressions in other contexts: PRINT, IF, ON-GOTO, FOR
    5.14.3 Exceptions in subscripts and arguments
    5.14.4 Exceptions in other contexts: PRINT, IF, ON-GOTO, FOR
    5.15 Miscellaneous checks
    5.15.1 Missing keyword
    5.15.2 Spaces
    5.15.3 Quotes
    5.15.4 Line numbers
    5.15.5 Line longer than 72 characters
    5.15.6 Margin overflow for output line
    5.15.7 Lowercase characters
    5.15.8 Ordering strings
    5.15.9 Mismatch of types in assignment

    6 Tables of summary information about the Test Programs
    6.1 Group structure of the Minimal BASIC Test Programs
    6.2 Test program sequence
    6.3 Cross-reference between ANSI Standard and Test Programs

    Appendix A: Differences between Versions 1 and 2 of the Minimal BASIC
                Test Programs

    References


    Figures
    -------

    Figure 1. Error and exception handling
    Figure 2. Format of test program output
    Figure 3. Instructions for the INPUT exceptions test


    Section 1: How to use this manual
    ---------------------------------

    This manual presents background information and operating instructions for
    the NBS Minimal BASIC Test Programs. Readers who want a general idea of
    what the programs are supposed to do, and why they are structured as they
    are, should read Sections 2 and 3. These sections give a brief explanation
    of BASIC, how it is standardized, and how the test programs help measure
    conformance to the standard. Those who wish to know how to interpret the
    results of program execution should also read Section 3, and then Section
    4 for the general rules of interpretation, and Section 5 for information
    peculiar to individual programs and groups of programs within the test
    system. Section 6 contains tables of summary information about the tests.

    Volume 2 of this publication consists of the source listings and sample
    outputs for all the Test Programs.

    The test system for BASIC should be helpful to anyone with an interest in
    measuring the conformance of an implementation of BASIC (e.g., a compiler
    or interpreter) to the Minimal BASIC standard. This would include 1)
    purchasers who want to be sure they are using a standard implementation,
    2) programmers who must use a given implementation and want to know in
    which areas it conforms to the standard, and which features to avoid or be
    wary of, and 3) implementors who may wish to use the tests as a
    development and debugging tool.

    Much of this manual is derived from the technical specifications in the
    American National Standard for Minimal BASIC, ANSI X3.60-1978 [Ref. 1].
    You will need a copy of that standard in order to understand most of the
    material herein. Copies are available from the American National Standards
    Institute, 1430 Broadway, New York, NY 10018. This document will
    frequently cite ANSI X3.60-1978, and references to "the standard" should
    be taken to mean that ANSI publication.

    The measure of success for Version 2 of the Minimal BASIC Test Programs is
    its usefulness to you. We, at NBS, would greatly appreciate hearing about
    your evaluation of the test system. We will respond to requests for
    clarification concerning the system and its relation to the standard.
    Also, we will maintain a mailing list of users who request to be notified
    of changes and major clarifications. Please direct all comments,
    questions, and suggestions to:

    Project Manager
    NBS Minimal BASIC Test Programs
    National Bureau of Standards
    Technology Bldg., Room A-265
    Washington, DC 20234


    Section 2: The language standard for BASIC
    ------------------------------------------

    2.1 History and prospects
    -------------------------

    BASIC is a computer programming language developed in the
    mid-1960's by
    Professors John G. Kemeny and Thomas E. Kurtz at Dartmouth
    College. The
    primary motivation behind its design was educational (in contrast
    to the
    design goals for, e.g., COBOL and FORTRAN) and, accordingly, the
    language has
    always emphasized ease of use and understanding, as opposed to simple
    machine
    efficiency. In July 1973, NBS published a "Candidate Standard for
    Fundamental
    BASIC" [Ref.2] by Prof. John A.N. Lee of the University of
    Massachusetts at
    Amherst. This work represented the beginning of a serious
    effort to
    standardize BASIC. The first meeting of the American National
    Standards
    Technical Committee on the Programming Language BASIC, X3J2, convened
    at CBEMA
    headquarters in Washington DC, on January 23-24, 1974, with Professor
    Kurtz as
    chairman. The committee adopted a program of work which envisioned
    development
    of a nucleus language, followed by modularized enhancements. The
    nucleus
    finally emerged as Minimal BASIC, which was approved as an ANSI
    standard
    January 17, 1978. As its name implies, the language defined in the
    standard is
    one which any implementation of BASIC should encompass.

    Meanwhile, NBS had been developing a set of Test Programs, the
    purpose of
    which was to exercise all the facilities defined in the standard, and
    thereby
    test conformance of implementations to the standard. This test
    suite was
    released as "NBSIR 78-1420-1, 2, 3, and 4, NBS Minimal BASIC Test
    Programs --
    Version 1, User's Manual" in January 1978. NBS distributed this
    version to
    more than 60 users, many of whom made suggestions about how the
    test suite
    might be improved. NBS has endeavored to incorporate these suggestions
    and re-
    design the tests where it seemed useful to do so. The result is the
    current
    Version 2 of the test suite. Appendix A contains a summary of the
    differences
    between versions 1 and 2.

    In order to provide a larger selection of high-level programming
    languages for
    the Federal government's ADP activities, the Department of
    Commerce has
    incorporated the ANSI standard as Federal Information Processing
    Standard 68.
    This means, among other things, that implementations of BASIC sold
    to the
    Federal government, after an 18-month transition period, must conform
    to the
    technical specifications of the ANSI standard: hence the NBS
    interest in
    developing a tool for measuring such conformance.

    ANSI X3J2 is currently (April 1980) working on a language standard for
    a much
    richer version of BASIC, which will provide such features as real-time
    process
    control, graphics, string manipulation, file handling, exception
    handling, and
    array manipulation. The current expectation is for ANSI adoption
    of this
    standard sometime in 1982. It is probable that such a standard for
    a full
    version of BASIC would be adopted as a Federal Information
    Processing
    Standard.


    2.2 The Minimal BASIC language
    ------------------------------

    Minimal BASIC is distinguished, among standardized computer languages, by
    its simplicity and its suitability for the casual user. It is simple, not
    only because of its historic purpose as a computer language for the casual
    user, but also because the ANSI BASIC committee organized its work around
    the concept of first defining a core or nucleus language which one might
    reasonably expect any implementation of BASIC to include, to be followed
    by a standard for enhanced versions of the language. Therefore, the
    tendency was to defer standardization of all the sophisticated features,
    and include only the simple features. In particular, Minimal BASIC has no
    facilities for file handling, string manipulation, or array manipulation,
    and has only rudimentary control structures. Although BASIC was originally
    designed for interactive use, the standard does not restrict
    implementations to that use.

    Minimal BASIC provides for only two types of data, numeric (with the
    properties usually associated with real or floating-point numbers) and
    string. String data can be read as input, printed as output, and moved and
    compared internally. The only legal comparisons, however, are equal or not
    equal; no collating sequence among characters is defined. Numeric data can
    be manipulated with the usual choice of operations: addition, subtraction,
    multiplication, division, and involution (sometimes called
    exponentiation). There is a modest assortment of numeric functions. One-
    and two-dimensional arrays are allowed, but only for numeric data, not
    string.

    For control, there is a GOTO, an IF which can cause control to jump to any
    line in the program, a GOSUB and RETURN for internal subroutines, a FOR
    and NEXT statement to execute loops while incrementing a control-variable,
    and a STOP statement.

    Input and output are accomplished with the INPUT and PRINT statements,
    both of which are designed for use on an interactive terminal. There is
    also a feature which has no real equivalent among the popular computer
    languages: a kind of internal file of data values, numeric and string,
    which can be assigned to variables with a READ statement (not to be
    confused with INPUT, which handles external data). The internal set of
    values is created with DATA statements, and may be read repetitively from
    the beginning of the set, by using the RESTORE statement.
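
    As a concrete illustration of this internal-data feature, the following
    sketch (a made-up example, not one of the NBS test programs) READs the
    DATA list once, RESTOREs it, and then reads from the beginning again:

    100 REM ILLUSTRATIVE SKETCH ONLY, NOT AN ACTUAL NBS TEST PROGRAM
    110 REM FIRST PASS: READ AND PRINT EACH NAME/VALUE PAIR
    120 FOR I = 1 TO 3
    130 READ N$, X
    140 PRINT N$; X
    150 NEXT I
    160 REM RESTORE RESETS THE POINTER TO THE FIRST DATA ITEM
    170 RESTORE
    180 READ N$, X
    190 PRINT "AFTER RESTORE: "; N$; X
    200 DATA "ALPHA", 1, "BETA", 2, "GAMMA", 3
    210 END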

    The programmer may (but need not) declare the size of arrays with the DIM
    statement, and specify that subscripts begin at 0 or 1 with an OPTION
    statement. The programmer may also establish numeric user-defined
    functions with the DEF statement, but only as simple expressions, taking
    one argument. The RANDOMIZE statement works in conjunction with the RND
    function. If RND is called without execution of RANDOMIZE, it always
    returns the same sequence of pseudo-random numbers for each execution of
    the program. Executing RANDOMIZE causes RND to return an unpredictable set
    of values each time.

    The REM statement allows the programmer to insert comments or remarks
    throughout the program.
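
    The following sketch (again a made-up example, not part of the test
    suite) illustrates OPTION, DIM, a one-argument user-defined function,
    and the behavior of RND when RANDOMIZE is executed:

    100 REM ILLUSTRATIVE SKETCH ONLY, NOT AN ACTUAL NBS TEST PROGRAM
    110 OPTION BASE 1
    120 DIM A(5)
    130 DEF FNS(X) = X * X
    140 REM FILL THE ARRAY USING THE ONE-ARGUMENT USER FUNCTION
    150 FOR I = 1 TO 5
    160 LET A(I) = FNS(I)
    170 NEXT I
    180 PRINT "A(5) ="; A(5)
    190 REM WITHOUT LINE 200, RND WOULD REPEAT THE SAME SEQUENCE EACH RUN
    200 RANDOMIZE
    210 PRINT "A PSEUDO-RANDOM VALUE:"; RND
    220 END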

    Although the facilities of the language are modest, there is one area in
    which the standard sets rather stringent requirements, namely diagnostic
    messages. The mandate of the standard, that implementations exhibit
    reasonable behavior even when presented with unreasonable programs,
    follows directly from the design goal of solicitude towards the beginning
    or casual user. Thus, the standard takes care to define what happens if
    the user commits any of a number of syntactic or semantic blunders. The
    need to test these diagnostic requirements strongly affected the overall
    shape of the test system, as will become apparent in later sections.


    2.3 Conformance to the standard
    -------------------------------

    There are many reasons for establishing a standard for a programming
    language:
    the promotion of well-defined and well-designed languages as a
    consequence of
    the standardizing process itself, the ability to create language-
    based, rather
    than machine-based, software tools and techniques, the increase in
    programmer
    productivity which an industry-wide standard fosters, and so on. At
    bottom,
    however, there is one result essential to the success of a standard:
    program
    portability. The same program should not evoke perniciously different
    behavior
    in different implementations. Ideally, the same source code
    and data
    environment should produce the same output, regardless of the
    machine
    environment.

    How does conformance to the standard work towards this goal?
    Essentially, the
    standard defines the set of syntactically legal programs, assigns a
    semantic
    meaning to all of them, and then requires that implementations
    (sometimes
    called "processors"; we will use the two terms interchangeably
    throughout this
    document) actually produce the assigned meaning when presented with
    a legal
    program.


    2.3.1 Program conformance
    -------------------------

    Program conformance, then, is a matter of syntax. We look at the source
    code and determine whether or not it obeys all the rules laid down in the
    standard. These rules are mostly spelled out in the syntax sections of the
    standard, using a variant of Backus-Naur Form (BNF). They are
    supplemented, however, by certain context-sensitive constraints, which are
    contained in the semantics sections. Thus, the rules form a ceiling for
    conforming programs. If a program has been constructed according to the
    rules, it meets the standard, WITHOUT REGARD TO ITS SEMANTIC MEANING. The
    syntactic rules are fully implementation-independent: a standard program
    must be accepted by ALL standard processors. Further, since we can tell if
    a program is standard by mere inspection, it would be a reasonably easy
    job to build a recognizer or syntax checker which could always discover
    whether or not a program is standard. Unfortunately, such a recognizer
    could not be written in Minimal BASIC itself, and this probably explains
    why no recognizer to check program conformance has gained wide acceptance.
    At least one such recognizer does exist, however. Called PBASIC [Ref. 3],
    it was developed at the University of Kent at Canterbury, and is written
    in PFORT, a portable subset of FORTRAN. PBASIC was used to check the
    syntax of the Version 2 Minimal BASIC Test Programs.


    2.3.2 Implementation conformance
    --------------------------------

    Implementation conformance is derivative of the more primitive
    concept of
    program conformance. In contrast to the way in which program
    conformance is
    described, processor conformance is specified functionally, not
    structurally.
    The essential requirement is that an implementation accept any
    standard
    program, and produce the behavior specified by the language standard.
    That is,
    the implementation must make the proper connection between the syntax
    of the
    program and the operation of the computer system. Note that this is
    a black
    box description of the implementation. Whether the realization of the
    language
    is done with a compiler, an interpreter, firmware, or by being hard-
    wired, is
    irrelevant. Only the external behavior is important, not the
    internal
    structure -- quite the opposite of the way program conformance is
    determined.

    The difference in the way conformance is defined for programs and
    processors
    radically affects the test methodology by which we determine
    whether the
    standard is met. The relevant point is that there currently is no
    way to be
    certain that an implementation does conform to the standard, although
    we can
    sometimes be certain that it does not. In short, there is no
    algorithm, such
    as the recognizer that exists for programs, by which we
    can answer
    definitively the question: "Is this a standard processor?"

    Furthermore, the standard acts as a floor for processors, rather
    than a
    ceiling. That is, an implementation must accept and process AT
    LEAST all
    standard programs, but may also implement enhancements to the
    language, and
    thus accept non-standard programs as well. Another difference between
    program
    and processor conformance is that the description of processor
    conformance
    allows for some implementation-dependence, even in the treatment of
    standard
    programs. Thus, for some standard programs, there is no unique
    semantic
    meaning, but rather a set of meanings, usually similar,
    among which
    implementations can choose.


    Section 3: Determining implementation conformance
    -------------------------------------------------

    3.1 Test programs as test data, not algorithms
    ----------------------------------------------

    The Test Programs do not embody some definitive algorithm by
    which the
    question of processor conformance can be answered yes or no.
    There is an
    important sense in which it is only accidental that they are programs
    at all;
    indeed, some of them, syntactically, are not. Rather, their primary
    function
    is as test data. It is readily apparent, for instance, that the
    majority of
    BASIC test programs are algorithmically trivial; some consist only of
    a series
    of PRINT statements. Viewed as test data, however, i.e., a series of
    inputs to
    a system whose behavior we wish to probe, the underlying motivation
    for their
    structure becomes intelligible. Simply put, it is the goal of the
    tests to
    exercise at least one representative of every meaningfully distinct
    type of
    syntactic structure or semantic behavior provided for in the
    language
    standard. This strategy is characteristic of testing in general: all
    one can
    do is submit a representative subset of the typically infinite
    number of
    possible inputs to the system under investigation (the implementation)
    and see
    whether the results are in accord with the specifications for that
    system (the
    language standard).

    Thus, successful results of the tests are necessary, but NOT
    sufficient to
    show that the specifications are met. A failed test shows that a
    language
    implementation is not standard. A passed test shows that it may be.
    A long
    series of passed tests which seem to cover all the various aspects
    of the
    language gives us a large measure of confidence that the
    implementation
    conforms to the standard.

    It can scarcely be stressed too strongly that the Test Programs
    do NOT
    represent some self-sufficient algorithm which will automatically
    deliver
    correct results to a passive observer. Rather, they are best seen
    as one
    component in a larger system comprising not only the programs,
    but the
    documentation of the programs, the documentation of the processor
    under test,
    and, not least, a reasonably well-informed user who must actively
    interpret
    the results of the tests in the context of some broad background
    knowledge
    about the programs, the processor, and the language standard. If, for
    example,
    a processor rejects a standard program, it certainly fails to conform
    to the
    standard; yet, this is a type of behavior which can hardly be detected
    by the
    program itself: only a human observer who knows that the processor
    must accept
    standard programs, and that this program IS standard, is capable of
    the proper
    judgment that this processor therefore violates the language standard.


    3.2 Special issues raised by the standard requirements
    ------------------------------------------------------

    3.2.1 Implementation-defined features
    -------------------------------------

    At several points in the standard, processors are given a choice about
    how to
    implement certain features. These subjects of choice are listed in
    Appendix C
    of the standard. In order to conform, implementations must be
    accompanied by
    documentation describing their treatment of these features (see
    Section
    1.4.2(7) of the standard). Many of these choices, especially those
    concerning
    numeric precision, string and numeric overflow, and uninitialized
    variables,
    can have a marked effect on the result of executing even standard
    programs. A
    given program, for instance, might execute without exceptions on one
    standard
    implementation, and cause overflow on another, with a notably
    different
    numeric result. The programs that test features in these areas
    call for
    especially careful interpretation by the user.

    Another class of implementation-defined features is that
    associated with
    language enhancements. If an implementation executes non-standard
    programs, it
    also must document the meaning it assigns to the non-standard
    constructions
    within them. For instance, if an implementation allows comparison of
    strings
    with a less-than operator, it must document its interpretation
    of this
    comparison.


    3.2.2 Error and exception reporting
    -----------------------------------

    The standard for BASIC, in view of its intended user base of
    beginning and
    casual programmers, attempts to specify what a conforming processor
    must do
    when confronted with non-standard circumstances. There are two ways
    in which
    this can happen: 1) a program submitted to the processor might not
    conform to
    the standard syntactic rules, or 2) the executing program might
    attempt some
    operation for which there is no reasonable semantic
    interpretation, e.g.,
    division by zero, assignment to a subscripted variable outside of
    the array.
    In the BASIC standard, the first case is called an "error", and the
    second an
    "exception" and, in order to conform, a processor must take certain
    actions
    upon encountering either sort of anomaly.

    Given a program with a syntactically non-standard construction, the
    processor
    must either reject the program with a message to the user noting
    the reason
    for rejection, or, if it accepts the program, it must be
    accompanied by
    documentation which describes the interpretation of the construction.

    If a condition defined as an exception arises in the course of
    execution, the
    processor is obliged, first to report the exception, and then to do
    one of two
    things, depending on the type of exception: either it must apply a
    so-called
    recovery procedure and continue execution, or it must terminate
    execution.

    Note that it is the user, not the program, who must determine
    whether there
    has been an adequate error or exception report, or whether
    appropriate
    documentation exists. The pseudo-code in Figure 1 describes how
    conforming
    implementations must treat errors. It may be thought of as an
    algorithm which
    the USER (not the programs) must execute in order to interpret
    correctly the
    effect of submitting a test program to an implementation.

    Error handling
    --------------

    if program is standard
        if program accepted by processor
            if correct results and behavior
                processor PASSES
            else
                processor FAILS (incorrect interpretation)
            endif
        else
            processor FAILS (rejects standard program)
        endif
    else (program non-standard)
        if program accepted by processor
            if non-standard feature correctly documented
                processor PASSES
            else
                processor FAILS (incorrect/missing documentation
                                 for non-standard feature) (*)
            endif
        else (non-standard program rejected)
            if appropriate error message
                processor PASSES
            else
                processor FAILS (did not report reason for rejection)
            endif
        endif
    endif

    (*) = Note that ALL implementation-defined features must be documented
          (see Appendix C in the ANSI standard), not just non-standard
          features.


    ----------------------------------------------------------------------

    Exception handling
    ------------------

    if processor reports exception
        if procedure is specified for exception
           and host system capable of procedure
            if processor follows specified procedure
                processor PASSES
            else
                processor FAILS (recovery procedure not followed)
            endif
        else (no procedure specified, or unable to handle)
            if processor terminates program
                processor PASSES
            else
                processor FAILS (non-termination on fatal exception)
            endif
        endif
    else
        processor FAILS (fail to report exception)
    endif

    Figure 1. Error and exception handling

    The procedure for error handling in Figure 1 speaks of a processor
    accepting
    or rejecting a program. The Glossary (Section 19) of the standard
    defines
    "accept" as "to acknowledge as being valid". A processor, then, is
    said to
    reject a program if it, in some way, signifies to the user that an
    invalid
    construction (and not just an exception) has been found,
    whenever it
    encounters the presumably non-standard construction, or if the
    processor
    simply fails to execute the program at all. A processor implicitly
    accepts a
    program if the processor encounters all constructions within the
    program with
    no indication to the user that the program contains constructions
    ruled out by
    the standard or the implementation's documentation.

    In like manner, we can construct pseudo-code operating instructions
    to the
    user, which describes how to determine whether an EXCEPTION has been
    handled
    in conformance with the standard, and this is shown also in Figure 1.

    As a point of clarification, it should be understood that these categories
    of error and exception apply to all implementations, both compilers and
    interpreters, even though they are more easily understood in terms of a
    compiler, which first does all the syntax checking and then all the
    execution, than of an interpreter. There is no requirement, for instance,
    that error reports precede exception reports. It is the content, rather
    than the timing, of the messages that the standard addresses. Messages
    rejecting errors should stress the fact of ill-formed source code.
    Exception reports should note the conditions, such as data values or flow
    of control, that are abnormal, without implying that the source code per
    se is invalid.


    Section 4: Structure of the test system
    ---------------------------------------

    The design of the Test Programs is an attempt to harmonize several
    disparate
    goals: 1) exercise all the individual parts of the standard,
    2) test
    combinations of features where it seems likely that the interaction
    of these
    features is vulnerable to incorrect implementation, 3) minimize the
    number of
    tests, 4) make the tests easy to use and their results easy to
    interpret, and
    5) give the user helpful information about the implementation
    even, if
    possible, in the case of failure of a test. The rest of this section
    describes
    the strategy we ultimately adopted, and its relationship to
    conformance and to
    interpretation by the user of the programs.


    4.1 Testing features before using them
    --------------------------------------

    Perhaps the most difficult problem of design is to find some
    organizing
    principle which suggests a natural sequence to the programs. In many
    ways, the
    most natural and simple approach is simply to test the language
    features in
    the order they appear in the standard itself. The major problem
    with this
    strategy is that the tests must then use untested features, in
    order to
    exercise the features of immediate interest. This raises the
    possibility that
    the feature ostensibly being tested might wrongly pass the test
    because of a
    flaw in the implementation of the feature whose validity is
    implicitly being
    assumed. Furthermore, when a test does report a failure, it is
    not clear
    whether the true cause of the failure was the feature under test or
    one of the
    untested features being used.

    These considerations seemed compelling enough that we decided to
    order the
    tests according to the principle of testing features before using
    them. This
    approach is not without its own problems, however. First and most
    importantly,
    it destroys any simple correspondence between the tests and sections
    of the
    standard. The testing of a given section may well be scattered
    throughout the
    entire test sequence, and it is not a trivial task to identify
    just those
    tests whose results pertain to the section of interest. To
    ameliorate this
    problem, we have been careful to note at the beginning of each test
    just which
    sections of the standard it applies to, and have compiled a cross-
    reference
    listing (see Section 6.3), so that you may quickly find the tests
    relevant to
    a particular section. A second problem is that, occasionally, the
    programming
    of a test becomes artificially awkward, because the language
    feature
    appropriate for a certain task has not been tested yet. While the
    programs
    generally abide by the test-before-use rule, there are some cases in
    which the
    price in programming efficiency and convenience is simply too
    high and,
    therefore, a few of the programs do employ untested features.
    When this
    happens, however, the program always generates a message telling
    you which
    untested feature it is depending on. Furthermore, we were careful to use
    the untested feature in a simple way, unlikely to interact with the
    feature under test or to mask errors in its own implementation.


    4.2 Hierarchical organization of the tests
    ------------------------------------------

    Within the constraints imposed by the test-before-use rule, we tried
    to group
    together functionally related tests. This grouping should also
    help you
    interpret the tests better, since you can usually concentrate on one
    part of
    the standard at a time, even if the parts themselves are not in order.
    Section
    6.1 of this manual contains a summary of the hierarchical group
    structure. It
    relates a functional subject to a sequential range of tests, and also
    to the
    corresponding sections of the standard. WE STRONGLY RECOMMEND THAT
    YOU READ
    THE RELEVANT SECTIONS OF THE STANDARD CAREFULLY BEFORE RUNNING THE
    TESTS IN A
    PARTICULAR GROUP. The documentation contained herein explains the
    rationale
    for the tests in each group, but it is not a substitute for a
    detailed
    understanding of the standard itself.

    Many of the individual test programs are themselves further broken
    down into
    so-called "sections". Thus, the overall hierarchical subdivision
    scheme is
    given by, from largest to smallest: system, groups, sub-groups,
    programs,
    sections. Program sections are further discussed below
    under: 4.4.3
    Documentation.


    4.3 Environment assumptions
    ---------------------------

    The test programs are oriented towards executing in an
    interactive
    environment but, generally, can be run in batch mode as well. Some
    of the
    programs do require input, however, and these present more of a
    problem, since
    the input needed often depends on the immediately preceding output
    of the
    program. See the sample output in Volume 2 for help in setting up
    data files,
    if you plan to run all the programs non-interactively. The programs
    which use
    the INPUT statement are 73, 81, 84, 107-113, and 203.

    We have tried to keep the storage required for execution within
    reasonable
    bounds. Array sizes are as small as possible, consistent with
    adequate
    testing. No program exceeds 300 lines in length. The programs
    print many
    informative messages which may be changed without affecting the
    outcome of the
    tests. If your implementation cannot handle a program because of its
    size, you
    should set up a temporary copy of the program with the informative
    messages
    cut down to a minimum, and use that version. Be careful not to omit
    printing,
    which is a substantive part of the test itself.

    The tests assume that the implementation-defined margin for output
    lines is at
    least 72 characters long, and contains at least 5 print zones. This
    should not
    be confused with the length of a line in the source code itself. The
    standard
    requires implementations to accept source lines up to 72 characters
    long. If
    the margin is smaller than 72, the tests should still run (according
    to the
    standard), but the output will be aesthetically less pleasing.

    Finally, the standard does not specify how the tests are to be
    submitted to
    the processor for execution. Therefore, the machine-readable part of
    the test
    system consists only of source code, i.e., there are no system
    control
    commands. It is your responsibility to submit the programs
    to the
    implementation in a natural way which does not violate the integrity
    of the
    tests.


    4.4 Operating and interpreting the tests
    ----------------------------------------

    This section will attempt to guide you through the practical aspects
    of using
    the test programs as a tool to measure implementation conformance.
    The more
    general issues of conformance are covered in Section 3 and, of course,
    in the
    standard itself, especially Sections 1 and 2 of the ANSI document.


    4.4.1 User checking vs. self-checking
    -------------------------------------

    ALL of the Test Programs require interpretation of their behavior by
    the user.
    As mentioned earlier, the user is an active component in the test
    system; the
    source code of the Test Programs is another component, subordinate to
    the test
    user. An important goal in the design of the programs was the
    minimization of
    the need for sophisticated interpretation of the test
    results; but
    minimization is not elimination. In the best case, the program will
    print out
    a conspicuous message indicating that the test passed or failed, and
    you need
    only interpret this message correctly. In other cases, you have to
    examine
    rather carefully the results and behavior of the program, and must
    apply the
    rules of the standard yourself. This interpretation is necessary in:

    1. Programs which test that PRINTed output is produced in a certain
       format.

    2. Programs which test that termination occurs at the correct time
       (this arises in many of the exception tests).

    3. Programs for which conformance depends on the existence of adequate
       documentation of implementation-defined features (both those defined
       in Appendix C of the standard, and for any of the error tests that
       are accepted).

    The Test Programs are an only partially automated solution to the
    problem of
    determining processor conformance. Naive reliance on the Test
    Programs alone
    can very well lead to an incorrect judgment about whether an
    implementation
    meets the standard.


    4.4.2 Types of tests
    --------------------

    There are four types of Test Programs: 1) standard, 2) exception,
    3) error,
    and 4) informative. Within each of the functional groups
    (described above,
    Section 4.2), the tests occur in that order, although not all groups
    have all
    four types. The rules that pertain to each type follow. IT IS QUITE
    IMPORTANT
    THAT YOU BE AWARE OF WHICH TYPE OF TEST YOU ARE RUNNING, AND USE
    THE RULES
    WHICH APPLY TO THAT TYPE.


    4.4.2.1 Standard tests
    ----------------------

    These tests are the ones whose title does not begin with
    "EXCEPTION" or
    "ERROR", and which generate a message about passing or failing at the
    end of
    each section. The paragraph below on documentation describes the
    concept of
    sections of a test. SINCE THESE PROGRAMS ARE SYNTACTICALLY STANDARD
    AND RAISE
    NO EXCEPTION CONDITIONS, THEY MUST BE ACCEPTED AND EXECUTED TO
    COMPLETION BY
    THE IMPLEMENTATION. If the implementation fails to do this, it has
    failed the
    test. For example, if the implementation fails to recognize the
    keyword
    OPTION, or if it does not accept one of the numeric constants, then
    the test
    has failed. Quite obviously, it is you who must apply this rule,
    since the
    program itself will not execute at all.

    Assuming that the implementation does process the program, the next
    question
    is whether it has done so correctly. The program may be able to
    determine this
    itself, or you may have to do some active interpretation of the
    results. See
    the section below on documentation for more detail.
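
    As a concrete illustration of the self-checking style, a minimal
    hypothetical section (not one of the actual NBS programs; the section
    number is a placeholder) might assign a value with LET and verify it
    with IF:

    100 REM ILLUSTRATIVE SKETCH ONLY, NOT AN ACTUAL NBS TEST PROGRAM
    110 PRINT "SECTION 999.1: LET WITH A NUMERIC EXPRESSION."
    120 PRINT "BEGIN TEST."
    130 LET A = 2
    140 LET B = A * 3 + 4
    150 IF B <> 10 THEN 180
    160 PRINT "*** TEST PASSED ***"
    170 GOTO 190
    180 PRINT "*** TEST FAILED ***"
    190 PRINT "END TEST."
    200 END

    Since such a sketch is syntactically standard and raises no exceptions,
    a conforming implementation must accept it and run it to completion; the
    user's remaining job is to read the pass/fail message.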


    4.4.2.2 Exception tests
    -----------------------

    These tests have titles that begin with the word "EXCEPTION", and
    examine the
    behavior of the implementation when exception conditions
    occur during
    execution. Nonetheless, these programs are also standard
    conforming (i.e.,
    syntactically valid) and, thus, the implementation must accept and
    process
    them.

    There are two special considerations. The first is the distinction
    between so-
    called fatal and non-fatal exceptions. Some exceptions in the standard
    specify
    a recovery procedure which allows continued execution of its
    program, while
    others (the fatal exceptions) do not. If no recovery procedure is
    specified,
    the implementation must report the exception, and then terminate the
    program.
    Programs testing fatal exceptions will print out a message that they
    are about
    to attempt the instruction causing the exception. If execution
    proceeds beyond
    that point, the test fails and prints a message so stating. With
    the non-
    fatal exceptions, the test program attempts to discover whether the
    recovery
    procedure has been applied or not and, in this instance, the test is
    much like
    the standard tests, where the question is whether the
    implementation has
    followed the semantic rules correctly. For instance, the semantic
    meaning of
    division by zero is to report the exception, supply machine
    infinity, and
    continue. The standard, however, allows implementations to terminate
    execution
    after even a non-fatal exception "if restrictions imposed by the
    hardware or
    operating environment make it impossible to follow the given
    procedures."
    Because it would be redundant to keep noting this allowance, the Test
    Programs
    do NOT print such a message for each non-fatal exception.
    THEREFORE, WHEN
    RUNNING A TEST FOR A NON-FATAL EXCEPTION, NOTE THAT THE
    IMPLEMENTATION MAY,
    UNDER THE STATED CIRCUMSTANCES, TERMINATE THE PROGRAM, RATHER THAN
    APPLY THE
    RECOVERY PROCEDURE.

    The second special consideration is that, in the case of INPUT and numeric
    and string overflow, the precise conditions for the exception can be
    implementation-defined. It is possible, therefore, that a standard
    program, executing on two different standard-conforming processors, using
    the same data, could cause an exception in one implementation, and not in
    the other. The tests attempt to force the exception to occur, but it could
    happen, especially in the case of string overflow, that a syntactically
    standard program cannot force such an exception in a given processor. THE
    DOCUMENTATION ACCOMPANYING THE IMPLEMENTATION UNDER TEST MUST DESCRIBE
    CORRECTLY THOSE IMPLEMENTATION-DEFINED FEATURES UPON WHICH THE OCCURRENCE
    OF EXCEPTIONS DEPENDS. That is, it must be possible to find out from the
    documentation whether and when overflow and INPUT exceptions will occur in
    the test programs.

    There is a summary of the requirements for exception handling in the
    form of
    pseudo-code in Section 3.2.2 (Figure 1).
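
    For illustration, a hypothetical non-fatal exception test (a sketch in
    the spirit of the suite, not an actual NBS program) could rely on the
    rule cited above for division by zero: report the exception, supply
    machine infinity, and continue.

    100 REM ILLUSTRATIVE SKETCH ONLY, NOT AN ACTUAL NBS TEST PROGRAM
    110 PRINT "ABOUT TO DIVIDE BY ZERO (NON-FATAL EXCEPTION)."
    120 LET A = 1
    130 LET B = 0
    140 LET C = A / B
    150 REM IF RECOVERY WAS APPLIED, C NOW HOLDS MACHINE INFINITY
    160 PRINT "EXECUTION CONTINUED; C ="; C
    170 PRINT "CHECK THAT THE EXCEPTION WAS REPORTED ABOVE."
    180 END

    Remember that, as noted above, an implementation may instead terminate
    the program after reporting the exception if its environment cannot
    follow the recovery procedure.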


    4.4.2.3 Error tests
    -------------------

    These tests have titles that begin with the word "ERROR", and
    examine how a
    processor handles a non-standard program. Each of these programs
    contains a
    syntactic construction explicitly ruled out by the standard, either
    in the
    various syntax sections, or in the semantics sections. GIVEN A PROGRAM
    WITH A
    SYNTACTICALLY NON-STANDARD CONSTRUCTION, THE PROCESSOR MUST EITHER
    REJECT THE
    PROGRAM WITH A MESSAGE TO THE USER NOTING THE REASON FOR REJECTION
    OR, IF IT
    ACCEPTS THE PROGRAM, IT MUST BE ACCOMPANIED BY DOCUMENTATION WHICH
    DESCRIBES
    THE INTERPRETATION OF THE CONSTRUCTION. Testing this requirement
    involves the
    submission of deliberately illegal programs to the processor, to
    see if it
    will produce an appropriate message or, if it contains an enhancement
    of the
    language such as to assign a semantic meaning to the error. Thus, we
    are faced
    with an interesting selection problem: out of the infinity of non-
    standard
    programs, which are worth submitting to the processor? Three
    criteria seem
    reasonable to apply:

    1. Test errors which we might expect would be most difficult for a
       processor to detect, e.g., violations of context-sensitive
       constraints. These are the ones ruled out by the semantics, rather
       than syntax, sections of the standard.

    2. Test errors likely to be made by beginners, for example use of a
       two-character array name.

    3. Test errors for which there may very well exist a language
       enhancement, e.g., comparing strings with "<" and ">".

    Based on these criteria, the test system contains programs for the
    errors in
    the two lists which follow. The first list is for constructions ruled
    out by
    the semantics sections alone (these, usually, are instances of
    context-
    sensitive syntax constraints) and the second for plausible syntax
    errors ruled
    out by the BNF productions.

    Context-sensitive errors:

    1. Line number out of strictly ascending order
    2. Line number of zero
    3. Line-length > 72 characters
    4. Use of an undefined user function
    5. Use of a function before its definition
    6. Recursive function definition
    7. Duplicate function definition
    8. Number of arguments in function invocation <> number of parameters
       in function definition
    9. Reference to numeric-supplied-function with incorrect number of
       arguments
    10. No spaces around keywords
    11. Spaces within keywords and other elements, or before line number
    12. Non-existent line number for GOTO, GOSUB, IF-THEN, ON-GOTO
    13. Mismatch of control variables in FOR-blocks (e.g., interleaving)
    14. Nested FOR-blocks with same variable
    15. Jump into FOR-block
    16. Conflict on number of dimensions among references: A, A(1), A(1,1)
    17. Conflict on number of dimensions between DIM and reference, e.g.,
        DIM A(20) and either A or A(2,2)
    18. Reference to subscripted variable followed by DIMensioning thereof
    19. Multiple OPTION statements
    20. OPTION follows reference to subscripted variable
    21. OPTION follows DIM
    22. OPTION BASE 1 followed by DIM A(0)
    23. DIM of same variable twice


    Context-free errors:

    1. Use of long name for array, e.g., A1(1)
    2. Assignment of string to number, and number to string
    3. Assignment without the keyword LET
    4. Comparison of two strings for < or >
    5. Comparison of a string with a number
    6. Unmatched parenthesis in expression
    7. FOR without matching NEXT, and vice-versa
    8. Multiple parameters in parameter list
    9. Line without line-number
    10. Line number longer than four digits
    11. Quoted strings containing the quote character (") or lowercase
        letters
    12. Unquoted strings containing quoted-string-characters
    13. Type mismatch on function reference (using string as an argument)
    14. DEF with string variable for parameter
    15. DEF with multiple parameters
    16. Misplaced or missing END-statement
    17. Null entries in various lists (INPUT, DATA, READ, e.g.)
    18. Use of "**" as involution (exponentiation) operator
    19. Adjacent operators, such as 2 ^ -4
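
    To make the error tests concrete, here is a hypothetical sketch (not one
    of the actual NBS programs) built around context-free error 4 above,
    comparison of two strings with "<":

    100 REM ILLUSTRATIVE ERROR-PROGRAM SKETCH, NOT AN ACTUAL NBS TEST
    110 REM LINE 130 COMPARES TWO STRINGS WITH "<", WHICH THE SYNTAX
    120 REM OF MINIMAL BASIC DOES NOT ALLOW
    130 IF "AA" < "AB" THEN 160
    140 PRINT "ENHANCEMENT PRESENT: AA IS NOT LESS THAN AB"
    150 GOTO 170
    160 PRINT "ENHANCEMENT PRESENT: AA IS LESS THAN AB"
    170 REM A CONFORMING PROCESSOR EITHER REJECTS THIS PROGRAM WITH A
    180 REM MESSAGE, OR DOCUMENTS ITS INTERPRETATION OF THE COMPARISON
    190 END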

    When developing programs to test for possible enhancements, we also tried
    to assist the user in confirming what the actual processor behavior is, so
    that it may be checked against the documentation. For example, the program
    that tests whether the implementation accepts "<" and ">" for comparison
    of strings also displays the implicit character collating sequence if the
    comparisons are accepted. When the implementation accepts an error
    program, be sure to check that the documentation does, in fact, describe
    the actual interpretation of the error, as exhibited by the test program.
    If the error program is rejected, the processor's error message should be
    a reasonably accurate description of the erroneous construction.

    There is a summary of the requirements for error handling in the
    form of
    pseudo-code in Section 3.2.2 (Figure 1).


    4.4.2.4 Informative tests
    -------------------------

    Informative tests are very much like standard tests. The
    implementation MUST
    accept and process them, since they are syntactically standard. The
    difference
    is that the standard only recommends, rather than requires, certain
    aspects of
    their behavior. The pass/fail message (described below) and other
    program
    output indicates when a test is informative, and not mandatory.
    All the
    informative tests have to do with the quality (as opposed to the
    existence) of
    various mathematical facilities. Specifically, the accuracy of the
    numeric
    operations and approximated functions, and the randomness of the RND
    function,
    are the subjects of informative tests. Some of the standard tests
    also have
    individual sections which are informative, and again the pass/fail
    message is
    the key to which sections are informative, and which mandatory. If
    numeric
    accuracy is important for your purposes, either as an implementor or
    a user,
    you should analyze closely the results of the informative tests.


    4.4.3 Documentation
    -------------------

    There are three kinds of documentation in the test system,
    serving three
    complementary purposes:

    1. The user's manual (this document).
       The purpose of this manual is to provide a global description of the
       test system, and how it relates to the standard and to conformance.
       At a more detailed level, there is also a description of each
       functional group of programs, and the particular things you should
       watch for when running that group.

    2. Program output.
       As far as possible, the programs attempt to explain themselves, and
       how they must be interpreted to determine conformance. Nonetheless,
       they make sense only in the context of some background knowledge of
       the BASIC standard and conformance (more detail below on output
       format).

    3. Remarks in the source code.
       Using the REM statement, the programs attempt to clarify their own
       internal logic, should you care to examine it. Many of the programs
       are algorithmically trivial enough that remarks are superfluous but,
       otherwise, remarks are there to guide your understanding of how the
       programs are intended to work.

    There is a format for program output consistent throughout the test
    sequence.
    The program first prints its identifying sequence number and title.
    The next
    line lists the sections of the ANSI standard to which this test
    applies. After
    this program header, there is general information, if any, pertaining
    to the
    whole program. Following all this program-level output, there is a
    series of
    one or more sections, numbered sequentially within the program
    number. Each
    section tests one aspect of the general feature being exercised
    by the
    program. Every section header displays the section number and title,
    and any
    information pertinent to that section. Then, the message:
    "BEGIN TEST."
    appears, after which the program attempts execution of the feature
    under test.
    At this point, the test may print information to help the user
    understand how
    execution is proceeding.

    Then comes the important part: a message, surrounded by asterisks,
    announcing
    "*** TEST PASSED ***" or "*** TEST FAILED ***". If the test cannot
    diagnose
    its own behavior, it will print a CONDITIONAL pass/fail message,
    prefacing the
    standard message with a description of what must or must not have
    happened for
    the test to pass. Be careful to understand and apply these
    conditions
    correctly. It is a good idea to read the ANSI standard with special
    attention
    in conjunction with this sort of test, so that you can better
    understand the
    point of the particular section.

    There is no pass/fail message for the error tests, since there is, of
    course,
    no standard semantics prescribed for a non-standard construction. As
    mentioned
    above, error programs usually generate messages to help you
    diagnose the
    behavior of the processor when it does accept such a program.

    After the pass/fail message will come a line containing "END
    TEST." which
    signals that the section is finished. If there is another section, the
    section
    header will appear next. If not, there will be a message announcing
    the end of
    the program. NOTE THAT EACH SECTION PASSES OR FAILS
    INDEPENDENTLY; ALL
    SECTIONS, NOT JUST THE LAST, MUST PRINT "*** TEST PASSED ***" FOR THE
    PROGRAM
    AS A WHOLE TO PASS. Figure 2 contains a schematic outline of standard
    program
    output.

    PROGRAM FILE nn: descriptive program title.
    ANSI STANDARD xx.x, yy.y ...

    Message if a feature is used before being tested, cf. Section
    4.1, and
    general remarks about the purpose of the program.

    SECTION nn.1: descriptive section title.

    Interpretive message for error or exception tests, and general
    remarks
    about the purpose of this section.

    BEGIN TEST.

    Function-specific messages and test results

    *** TEST PASSED (or FAILED) ***
    or
    *** INFORMATIVE TEST PASSED (or FAILED) ***
    or
    Conditional pass/fail message when it cannot be determined
    internally
    or
    Message to assist analysis of processor behavior for error
    program

    END TEST.

    SECTION nn.2: descriptive section title.
    ...
    SECTION nn.m: descriptive section title.
    ...
    END PROGRAM nn

    Figure 2. Format of test program output


    Section 5: Functional groups of test programs
    ---------------------------------------------

    This section contains information specific to each of the groups
    and sub-
    groups of programs within the test sequence. Groups are
    arranged
    hierarchically, as reflected in the numbering system. The sub-section
    numbers
    within this section correspond to the group numbering in the table of
    Section
    6.1, e.g., Section 5.12.1.2 of the manual describes functional group
    12.1.2.

    It is the purpose of this section to help you understand the
    overall
    objectives and context of the tests, by providing information
    supplementary to
    that already in the tests. This section will generally NOT
    simply repeat
    information contained in the tests themselves, except for emphasis.
    Where the
    tests require considerable user interpretation, this documentation
    will give
    you the needed background information. Where the tests are self-
    checking, this
    documentation will be correspondingly brief. We suggest that you
    first read
    the comments in this section to get the general idea of what the
    tests are
    trying to do, read the relevant sections of the ANSI standard to
    learn the
    precise rules, and finally run the programs themselves, comparing
    their output
    to the sample output in Volume 2. The messages written by the test
    programs
    are intended to tell you in detail just what behavior is necessary
    to pass,
    but these messages are not the vehicle for EXPLAINING how that
    criterion is
    derived from the standard. Program output should be reasonably
    intelligible by
    itself, but it is better understood in the broader context of the
    standard and
    its conformance rules.


    5.1 Simple PRINTing of string constants
    ---------------------------------------

    This group consists of one program which tests that the
    implementation is
    capable of the most primitive type of PRINTing, that of string
    constants, and
    also the null PRINT. Note that it is entirely up to you to determine
    whether
    the test passes or fails, by assuring that the program output is
    consistent
    with the expected output. The program's own messages describe
    what is
    expected. You may also refer to the sample output in Volume 2 to see
    what the
    output should look like.


    5.2 END and STOP
    ----------------

    This group tests the means of bringing BASIC programs to normal
    termination.
    These capabilities are tested early, since all the programs use them.
    Both END
    and STOP cause execution to stop when encountered, but STOP
    may appear
    anywhere in the program, any number of times. There must be exactly
    one END
    statement in a program, and it must be the last line in the source
    code. Thus,
    END serves both as a syntactic marker for the end of the program, and
    is also
    executable.

    Since the program cannot know when it has ended (although it can know
    when it
    has not), you must assure that the programs terminate at the right
    time.


    5.3 PRINTing and simple assignment (LET)
    ----------------------------------------

    This group of programs examines the ability of the implementation
    to print
    strings and numbers correctly. Both constants and variables are
    tested as
    print-items. The variables, of course, have to be given a value
    before they
    are printed, and this is done with the LET statement.

    PRINT is among the most semantically complex statements in BASIC.
    Furthermore,
    the PRINT statement is the outstanding case of a feature whose
    operation
    cannot be checked internally. The consequence is that THIS GROUP CALLS
    FOR THE
    MOST SOPHISTICATED USER INTERPRETATION OF ANY IN THE TEST
    SEQUENCE. Please
    read carefully the specifications in the programs, Section 12 of
    the ANSI
    standard, and this documentation; the interpretation of test
    results should
    then be reasonably clear.

    The emphasis in this group is on the correct representation of
    numeric and
    string values. There is some testing that TAB, comma, and semi-colon
    perform
    their functions, but a challenging exercise of these features is
    deferred
    until group 14.6, because of the other features needed to test them.


    5.3.1 String variables and TAB
    ------------------------------

    The PRINTing of strings is fairly straightforward, and should be
    relatively
    easy to check, since there are no implementation-defined features
    which affect
    the printing. The only possible problem is the margin width. The
    program
    assumes a margin of at least 60 characters, with at least 4 print
    zones. If
    your implementation supports a shorter margin, you must make due
    allowance for
    it. The standard does not prescribe a minimum margin.

    The string overflow test requires careful interpretation. Your
    implementation
    must have a defined maximum string length, and the fatal
    exception should
    occur on the assignment corresponding to that documented length.
    If the
    implementation supports at least 58 characters in the string,
    overflow should
    not occur. Be sure, if there is no overflow exception report,
    that the
    processor has, indeed, not lost data. Do this by checking that the
    output has
    not been truncated. A processor that loses string data without
    reporting
    overflow definitely fails.

    Checking for a TAB exception is simple enough; just follow the
    conditional
    pass/fail messages closely. Note that one section of the test
    should NOT
    generate an exception since, even though the argument itself is less
    than one,
    its value becomes one after rounding.
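
    As an illustration (this fragment is not one of the NBS test
    programs), a Minimal BASIC snippet such as the following should print
    its character starting in the first column, without raising any
    exception, because the TAB argument rounds to one:

    100 REM ILLUSTRATION ONLY - NOT PART OF THE NBS TEST SUITE
    110 REM THE ARGUMENT .6 ROUNDS TO 1, SO NO EXCEPTION SHOULD OCCUR
    120 PRINT TAB(.6); "A"
    130 PRINT TAB(1); "A"
    140 REM BOTH LINES SHOULD PRINT "A" STARTING IN COLUMN 1
    150 END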


    5.3.2 Numeric constants and variables
    -------------------------------------

    In the following discussion, the terms "significand", "exrad",
    "explicit
    point", "implicit point" and "scaled" are used in accordance with the
    meaning
    ascribed them in the ANSI standard.

    The rules for printing numeric values are fairly elaborate, and,
    moreover, are
    heavily implementation-dependent; accordingly, conscientious
    scrutiny is in
    order. There are two rules to keep in mind. First, the expected
    output format
    depends on the VALUE of the print-item, not its source format. In
    particular,
    integer values should print as integers as long as the significand-
    width can
    accommodate them, fractional values should print in explicit point
    unscaled
    format where no loss of accuracy results, and the rest should print in
    explicit point scaled format. For example, "PRINT 2.1E2" should
    produce "210"
    because the item has an integer value, even though it is written
    in source
    code in explicit point scaled format. Second, leading zeros in the
    exrad, and
    trailing zeros in the significand, may be omitted. Thus, for an
    implementation
    with a significand-width of 8 and an exrad-width of 3, the value
    1,230,000,000
    could print as "1.2300000E+009" at one extreme, or "1.23E+9" at the
    other. The
    tests generally display the expected output in the latter form, but
    it should
    be understood that extra zeros can be tacked on to the actual
    output, up to
    the widths specified for the implementation.
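
    As a sketch of the rule (not one of the NBS programs), the following
    fragment should illustrate both cases on a six-digit implementation;
    the exact spacing, and the number of extra zeros, are implementation-
    dependent:

    100 REM ILLUSTRATION ONLY - NOT PART OF THE NBS TEST SUITE
    110 REM AN INTEGER VALUE PRINTS AS AN INTEGER, WHATEVER ITS SOURCE FORM
    120 PRINT 2.1E2
    130 REM EXPECTED OUTPUT: 210
    140 REM A VALUE TOO LARGE FOR THE SIGNIFICAND PRINTS IN SCALED FORM
    150 PRINT 1230000000
    160 REM EXPECTED OUTPUT: SOMETHING LIKE 1.23E+9 (EXTRA ZEROS ALLOWED)
    170 END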

    The tests, in general, are oriented toward the minimum requirements
    of six
    decimal digits of accuracy, a significand length of six, and an exrad-
    width of
    two. You must apply the standard requirements in terms of
    your own
    implementation's widths, however.


    5.4 Control statements and REM
    ------------------------------

    This group checks that the simple control structures all work when
    used in a
    simple way. Some of the same facilities are checked more rigorously
    in later
    groups. As with PRINT, END, and STOP, these features must come early
    in the
    test sequence, since a BASIC program cannot do much of consequence
    without
    them. If any of these tests fail, the validity of much of the rest of
    the test
    sequence is doubtful, since following tests rely heavily on GOTO,
    GOSUB, and
    IF. Note especially that trailing blanks should be significant in
    comparing
    strings, e.g. "ABC" <> "ABC ". Subsequent tests which rely on this
    property
    of IF will give false results if the implementation does not
    process the
    comparison properly.
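
    A minimal sketch of the point about trailing blanks (not one of the
    NBS programs) might look like this; a conforming processor should take
    the branch that reports the strings as unequal:

    100 REM ILLUSTRATION ONLY - NOT PART OF THE NBS TEST SUITE
    110 IF "ABC" = "ABC " THEN 140
    120 PRINT "TRAILING BLANK IS SIGNIFICANT - EXPECTED BEHAVIOR"
    130 GOTO 150
    140 PRINT "STRINGS COMPARED EQUAL - NOT STANDARD BEHAVIOR"
    150 END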

    The tests for GOTO and GOSUB exercise a variety of transfers, to make
    sure the
    processor handles control correctly. If everything works, you
    should get
    intelligible, self-consistent output. If the output looks scrambled,
    the test
    has failed. There are no helpful diagnostics for failures,
    since it is
    impossible to anticipate exactly how a processor might misinterpret
    transfers
    of control. Look carefully at the sample output for the GOTO
    and GOSUB
    programs in Volume 2, to know what to expect.

    The IF-THEN tests use a somewhat complex algorithm, so pay attention
    to the
    REM statements if you are trying to understand the logic. On the
    other hand,
    these tests are easy to use because they are completely self-
    checking. You
    need only look for the pass/fail messages to see if they worked. It
    is worth
    noting that the IF-THEN test for numeric values depends on the
    validity of the
    IF-THEN test for strings, which comes just before.

    The error tests are understandable in light of the general
    rules for
    interpretation of error programs given earlier.


    5.5 Variables
    -------------

    The first of these programs simply checks that the set of valid
    names is as
    guaranteed by the standard. In particular, A, A0, and A$ are all
    distinct.
    There are no diagnostics for failure, since we expect failures to be
    rare, and
    it is simple enough to isolate the misinterpretation by modifying the
    program,
    if that proves necessary. A later test in group 8.1 tests
    that the
    implementation fulfills the requirements for array names.

    Default initialization of variables is one of the most important
    aspects of
    semantics left to implementation definition. Implementations may
    treat this
    however they want to, but it must be documented, and you should check
    that the
    documentation agrees with the behavior of the program. Thus, this
    is NOT
    merely an informative test; the processor must have correct
    documentation for
    its behavior, in order to conform.


    5.6 Numeric constants, variables, and operations
    ------------------------------------------------

    5.6.1 Standard capabilities
    ---------------------------

    This group of programs introduces the use of numeric expressions,
    specifically
    those formed with the arithmetic operations (+, -, *, /, ^) provided
    in BASIC.
    The most troublesome aspect of these tests is the explicit disavowal,
    in the
    standard, of any criterion of accuracy for the result of the
    operations. Thus,
    it becomes somewhat difficult to say at what point a processor
    fails to
    implement a given operation. We finally decided to require exact
    results only
    for integer arithmetic and, in the case of non-integral operands, to
    apply an
    extremely loose criterion of accuracy such that, if an
    implementation failed
    to meet it, one could reasonably conclude either that the precedence
    rules had
    been violated, or that the operation had not been implemented at all.

    Although the standard does not mandate accuracy for expressions,
    it does
    require that individual numbers be accurate to at least six
    significant
    decimal digits. This requirement is tested by assuring that
    values which
    differ by 1 in the 6th digit actually compare in the proper order,
    using the
    IF statement. The rationale for the accuracy test is best explained
    with an
    example: suppose we write the constant "333.333" somewhere in the
    program. For
    six digits of accuracy to be maintained, it must evaluate internally
    to some
    value between 333.3325 and 333.3335, since six digits of accuracy
    implies an
    error less than 5 in the 7th place. By the same reasoning,
    "333.334" must
    evaluate between 333.3335 and 333.3345. Since the allowable ranges
    do not
    overlap, the standard requires that 333.333 compare as strictly
    less than
    333.334. Of course, this same reasoning would apply to any two
    numbers which
    differed by 1 in the sixth digit.
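
    The same reasoning can be sketched directly in Minimal BASIC (this
    fragment is not one of the NBS programs); a conforming processor must
    take the branch reporting that the comparison held:

    100 REM ILLUSTRATION ONLY - NOT PART OF THE NBS TEST SUITE
    110 LET A = 333.333
    120 LET B = 333.334
    130 IF A < B THEN 160
    140 PRINT "SIXTH-DIGIT COMPARISON FAILED"
    150 GOTO 170
    160 PRINT "333.333 COMPARES LESS THAN 333.334, AS REQUIRED"
    170 END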

    The accuracy test not only assures that these minimal requirements
    are met,
    but also attempts to measure how much accuracy the implementation
    actually
    provides. It does this both by comparing some numbers in the manner
    described
    above for 7, 8, and 9 decimal digits, and also by using an
    algorithm to
    compute any reasonable internal accuracy. Since such an algorithm
    is highly
    sensitive to the peculiarities of the system's implementation of
    arithmetic,
    this last test is informative only.


    5.6.2 Exceptions
    ----------------

    The standard specifies a variety of exceptions for numeric
    expressions. All
    the mandatory non-fatal exceptions occur when machine infinity is
    exceeded,
    and they all call for the implementation to supply machine infinity
    as the
    result, and continue execution. The tests ensure that machine
    infinity is at
    least as great as the guaranteed minimum of 1E38 but, since machine
    infinity
    is implementation-defined, you must assure that the value actually
    supplied is
    accurately documented.

    It is worth repeating here the general guidance that the timing of
    exception
    reports is not specified by the standard. The wording is
    intentionally
    imprecise, to allow implementations to anticipate exceptions, if they
    desire.
    Such anticipation may well occur for overflow and underflow of
    numeric
    constants; that is, an implementation may issue the exception
    report before
    execution of the program begins. Note that the recovery
    procedure,
    substitution of machine infinity for overflow, remains in effect.

    Underflow, whether for expressions or constants, is only
    recommended as an
    exception but, in any case, zero must be supplied when the magnitude
    of the
    result is below the minimum representable by the implementation.
    Note that
    this is required in the semantics Sections (7.4 and 5.4) of the
    standard, not
    the exception Sections (7.5 and 5.5).


    5.6.3 Errors
    ------------

    These programs try out the effect of various constructions which
    represent
    either common programming errors (missing parentheses) or common
    enhancements
    (** as the involution (exponentiation) operator), or a blend of
    the two
    (adjacent operators). No special interpretation rules apply to
    these tests,
    beyond those normally associated with error programs.


    5.6.4 Accuracy tests -- informative
    -----------------------------------

    Although the standard mandates no particular accuracy for
    expression
    evaluation, such accuracy is, nonetheless, an important measure of the
    quality
    of language implementation, and is of interest to a large
    proportion of
    language users. Accordingly, these tests apply a criterion of accuracy
    for the
    arithmetic operations which is suggested by the standard's
    requirement that
    individual numeric values be represented accurate to six significant
    decimal
    digits. Note, however, that these tests are informative, not only
    because
    there is no strict accuracy requirement, but also because
    there is no
    generally valid way for a computer to measure precisely the accuracy
    of its
    own operations. Such a measurement involves calculations which must
    use the
    very facilities being measured.

    The criterion for passing or failing is based on the concept
    that an
    implementation should be at least as accurate as a reasonable
    hypothetical
    implementation which uses the least accurate numeric representation
    allowed by
    the standard. It is best explained by first considering accuracy for
    functions
    of a single variable, and then generalizing to operations, which
    may be
    thought of as functions of two variables. Given an internal
    precision of at
    least d decimal digits, we simply require that the computed value
    for f(x)
    (hereinafter denoted by "cf(x)") be some value actually taken on
    by the
    function within the domain [x-e, x+e], where

    e = 10 ^ (int (log10 (abs (x) ) ) + 1 - d)

    For example, suppose we want to test the value returned by
    sin(29.1234) and we
    specify that d=6. Then:

    e = 10 ^ (int (log10 (29.1234) ) + 1 - 6)
      = 10 ^ (int (1.464) + 1 - 6)
      = 10 ^ (-4)
      = 1E-4

    and so we require that csin(x) equal some value taken on by sin(x)
    in the
    interval [29.1233, 29.1235]. This then reduces to the test that
    -.7507297957 <= csin(29.1234) <= -.7505976588.
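
    For readers who want to reproduce the computation of e, here is a
    sketch in Minimal BASIC (not one of the NBS programs; it assumes LOG
    is the natural logarithm, so that log10(x) is obtained as
    LOG(x)/LOG(10)):

    100 REM ILLUSTRATION ONLY - COMPUTES THE TOLERANCE E FOR D = 6
    110 LET X = 29.1234
    120 LET D = 6
    130 LET E = 10 ^ (INT(LOG(ABS(X)) / LOG(10)) + 1 - D)
    140 PRINT "TOLERANCE E ="; E
    150 REM EXPECTED: 1E-4, GIVING THE ARGUMENT INTERVAL [29.1233, 29.1235]
    160 END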

    The motivation for the formula for e is as follows. According to the
    rule for
    accuracy of numbers, the internal representation of the argument
    must lie
    within [x-e/2, x+e/2]. Now, suppose that the internal representation
    is near
    an endpoint of the legal interval, and that the granularity of the
    machine
    (i.e., the difference between adjacent internal numeric
    representations) in
    that region of the real number line is near e (which would be the
    coarsest
    allowed, given accuracy of d digits). Given this worst case, we
    would still
    want a value returned for which the actual argument was closer
    to that
    internal representation than to immediately adjacent
    representations. This
    means that we allow for a variation of e/2 when the argument is
    converted from
    source to internal form, and another variation of e/2 around the
    internal
    representation itself. The maximum allowable variation along the x-
    axis is
    then simply the sum of the worst-case variations: e/2 + e/2 = e.
    This is
    reasonable if we think of a given internal form as representing not
    only a
    point on the real number line, but the set of points for which
    there is no
    closer internal form. Then, all we know is that the source
    argument is
    somewhere within that set, and all we require is that the computed
    value of
    the function be true for some (probably different) argument within
    the set.
    For accuracy d, the maximum width of the set is, of course, e.

    It should be noted that the first allowed variation of e/2 is inherent
    in the
    process of decimal (source) to, e.g., binary (internal) conversion.
    The case
    for allowing a variation of e/2 around the internal representation
    itself is
    somewhat weaker. If one insists on exact results within the internal
    numerical
    manipulation, then the function would be allowed to vary only
    within the
    domain [x-e/2, x+e/2], but we did not require this in the tests.

    Note that the above scheme not only allows for the discrete nature
    of the
    machine, but also for numeric instability in the function
    itself.
    Mathematically, if the value of an argument is known to six places,
    it does
    NOT follow that the value of the function is known to six places;
    the error
    may be considerably more or less. For example, a function is often
    very stable
    near where its graph crosses the y-axis, but not the x-axis (e.g.,
    COS(1E-22))
    and very unstable where it crosses the x-axis but not the y-
    axis (e.g.,
    SIN(21.99)). By allowing the cf(x) to take on any value in the
    specified
    domain, we impose strict accuracy where it can be achieved, and
    permit low
    accuracy where appropriate. Thus, the pass/fail criterion is
    independent of
    both the argument and function; it reflects only how well the
    implementation
    computed, relative to a worst-case six-digit machine.

    Finally, we must recognize that, even if the value of a function is
    computable
    to high accuracy (as with COS(1E-22)), the graininess of the
    machine will
    again limit how accurately the result itself can be represented.
    For this
    reason, there is an additional allowance of e/2 around the
    result. This
    implies that, even if the result is computable to, say, 20 digits,
    we never
    require more than 6 digits of accuracy.

    Now, all the preceding comments generalize quite naturally to
    functions of
    many variables. We can then be guided in our treatment of the
    arithmetic
    operations by the above remarks on functions, if we recall that the
    operations
    may be thought of as functions of two variables, namely their
    operands. If we
    think of, say, subtraction as such a function (i.e., subtract (x,y)
    = x-y),
    then the same considerations of argument accuracy and mathematical
    stability
    pertain. Thus, we allow both operands to vary within their
    intervals, and
    simply require the result of the operation to be within the extreme
    values so
    generated. Note that such a technique would be necessary for any of
    the usual
    functions which take two variables, such as some versions of arctan.

    It should be stressed that the resulting accuracy tests represent only
    a very
    minimal requirement. The design goal was to permit even the grainiest
    machine
    allowed by the standard to pass the tests; ALL conforming
    implementations,
    then, are inherently capable of passing. Many users will wish to
    impose more
    stringent criteria. For example, those interested in high
    accuracy, or
    implementors whose machines carry more than six digits, should examine
    closely
    the computed value and true value, to see if the accuracy is what they
    expect.


    5.7 FOR-NEXT
    ------------

    The ANSI standard provides a loop capability, along with an
    associated
    control-variable, through the use of the FOR statement. The
    semantic
    requirements for this construction are particularly well-
    defined.
    Specifically, the effect of the FOR is described in terms of more
    primitive
    language features (IF, GOTO, LET, and REM), which are themselves
    not very
    vulnerable to misinterpretation. The tests, accordingly, are quite
    specific
    and extensive in the behavior they require. The standard tests are
    completely
    self-checking, since conformance depends only on the value of the
    control-
    variable and number of times through the loop. The general design plan
    was not
    only to determine passing or failing, but also to display information
    allowing
    the user to examine the progress of execution. This should help you
    diagnose
    any problems. Note especially the requirement that the control
    variable, upon
    exit from the loop, should have the first unused, not the last used,
    value.

    The FOR statement has no associated exceptions, but it does have
    a rich
    variety of errors, many of them context sensitive, and, therefore,
    somewhat
    harder for an implementation to detect. As always, if any error
    programs are
    accepted, the documentation must specify what meaning the
    implementation
    assigns to them.


    5.8 Arrays
    ----------

    5.8.1 Standard capabilities
    ---------------------------

    The standard provides for storing numeric values in one- or two-
    dimensional
    arrays. The tests for standard capabilities are all self-checking,
    and quite
    straightforward in exercising some feature defined in the standard.
    Note the
    requirement that subscript values be ROUNDED to integers; the program
    testing
    this must NOT cause an exception, or the processor fails.
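
    As a rough sketch of the rounding rule (not one of the NBS programs),
    the following fragment should run without any exception and report 99:

    100 REM ILLUSTRATION ONLY - NOT PART OF THE NBS TEST SUITE
    110 DIM A(5)
    120 LET A(1) = 0
    130 LET A(1.4) = 99
    140 REM THE SUBSCRIPT 1.4 ROUNDS TO 1, SO NO EXCEPTION SHOULD OCCUR
    150 PRINT "A(1) ="; A(1)
    160 REM EXPECTED OUTPUT: A(1) = 99
    170 END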


    5.8.2 Exceptions
    ----------------

    The exception tests ensure that the subscript out of range
    condition is
    handled properly. Note that it is here that the real semantic
    meaning of
    OPTION and DIM are exercised; they have little effect other than to
    cause or
    prevent the subscript exception for certain subscript values. Since
    this is a
    fatal exception, you must check (since the program cannot) that the
    programs
    terminate at the right time, as indicated in their messages.


    5.8.3 Errors
    ------------

    As with the FOR statement, there are a considerable number of
    syntactic
    restrictions. The thrust of these restrictions is to assure
    that OPTION
    precedes DIM, that DIM precedes references to the arrays that it
    governs, and
    that declared subscript bounds are compatible.

    Three of the error programs call for INPUT from the user. This is to
    help you
    diagnose the actual behavior of the implementation, if it
    accepts the
    programs. The first of these, #73, lets you try to reference an array
    with a
    subscript of 0 or 1 when OPTION BASE 1 and DIM A(0) have been
    specified, to
    see when an exception occurs.

    The second, #81, allows you to try a subscript of 0 or 1 for an
    array whose
    DIM statement precedes the OPTION statement.

    The third program using INPUT, #84, is a bit more complex, and has to
    do with
    double dimensioning. If there are two DIM statements for the same
    array, the
    implementation has a choice of several plausible interpretations.
    We have
    noted five such possibilities, and have attempted to distinguish
    which, if
    any, seems to apply. Since the only semantic effect of DIM is to
    cause or
    prevent an exception for a given array reference, however, it is
    necessary to
    run the program three times to see when exceptions occur and when they
    do not,
    assuming the processor has not simply rejected the program
    outright. Your
    input-reply simply tells the program which of the three
    executions it is
    currently performing. For each execution, you must note whether an
    exception
    occurred or not, and then match the results against the table in the
    program.
    Suppose, for instance, that you get an exception the first time, but
    not the
    second or third. That would be incompatible with all five
    interpretations
    except number 4, which is that the first DIM statement executed sets
    the size
    of the array and it is never changed thereafter. As usual,
    check the
    documentation to make sure it correctly describes what happens.


    5.9 Control statements
    ----------------------

    This group fully exploits the properties of some of the control
    facilities
    which were tested in a simpler way in group 4. As before, there seemed
    no good
    way to provide diagnostics for failure of standard tests, since the
    behavior
    of a failing processor is impossible to predict. Passing
    implementations will
    cause the "*** TEST PASSED ***" message to appear, but certain
    kinds of
    failures might cause the programs to abort, without producing a
    failure
    message. Check Volume 2 for an example of correct output.


    5.9.1 GOSUB and RETURN
    ----------------------

    Most of the tests in this group are self-explanatory, but the one
    checking
    address stacking deserves some comment. The standard describes the
    effect of
    issuing GOSUBs and RETURNs in terms of a stack of return addresses,
    for which
    the GOSUB adds a new address to the top, and the RETURN uses the most
    recently
    added address. Thus, we get a kind of primitive recursion in the
    control
    structure (although without any stacking of "data"). Note
    that this
    description allows complete freedom in the placement of GOSUBs and
    RETURNs in
    the source code. There is no static association of any RETURN with
    any GOSUB.
    The test which verifies this specification computes binomial
    coefficients,
    using the usual recursive formula. The logic of the program
    is a bit
    convoluted, but intentionally so, in order to exercise the stacking
    mechanism
    vigorously.
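
    A much simpler sketch of the stacking rule (not the binomial-
    coefficient program itself) is shown below; each RETURN transfers
    control to the line following the most recently executed GOSUB:

    100 REM ILLUSTRATION ONLY - NOT PART OF THE NBS TEST SUITE
    110 PRINT "START"
    120 GOSUB 200
    130 PRINT "BACK AT THE MAIN LEVEL"
    140 GOTO 300
    200 PRINT "FIRST SUBROUTINE"
    210 GOSUB 250
    220 PRINT "FIRST SUBROUTINE AGAIN"
    230 RETURN
    250 PRINT "SECOND SUBROUTINE"
    260 RETURN
    300 END

    The expected output order is: START, FIRST SUBROUTINE, SECOND
    SUBROUTINE, FIRST SUBROUTINE AGAIN, BACK AT THE MAIN LEVEL.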


    5.9.2 ON-GOTO
    -------------

    The ON-GOTO tests are all readily understandable. The one thing you
    might want
    to watch for is that the processor ROUNDS the expression controlling
    the ON-
    GOTO to the nearest integer, as specified in the standard. Thus,
    "ON .6 GOTO",
    "ON 1 GOTO", and "ON 1.4 GOTO" should all have the same effect;
    there should
    be no out of range exception for values between .5 and 1.
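
    A minimal sketch of the rounding rule (not one of the NBS programs);
    a conforming processor should take the first branch:

    100 REM ILLUSTRATION ONLY - NOT PART OF THE NBS TEST SUITE
    110 LET X = 1.4
    120 ON X GOTO 150, 170
    130 PRINT "ON-GOTO DID NOT TRANSFER CONTROL"
    140 GOTO 180
    150 PRINT "INDEX ROUNDED TO 1 - FIRST BRANCH TAKEN, AS EXPECTED"
    160 GOTO 180
    170 PRINT "SECOND BRANCH TAKEN - INDEX WAS NOT ROUNDED TO 1"
    180 END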


    5.10 READ, DATA, and RESTORE
    ----------------------------

    This group tests the facilities for establishing a stream of data
    in the
    program, and accessing it sequentially. This feature has
    some subtle
    requirements, and it would be wise to read the standard especially
    carefully,
    so that you understand the purpose of the tests.


    5.10.1 Standard capabilities
    ----------------------------

    All but the last of these tests are reasonably simple. The last test
    dealing
    with the general properties of READ and DATA, although self-
    checking, has
    somewhat complex internal logic. It assures that the range of operands
    of READ
    and DATA can overlap freely, and that a given datum can be read as
    numeric at
    one time, and as a string at a later time. If you need to examine the
    internal
    logic closely, be sure to use the REM statements at the beginning,
    which break
    down the structure of the READ and DATA lists for you.
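
    A minimal sketch of the latter property (not one of the NBS programs):
    the same two data appear once in the DATA statement, and are read
    first as numbers and then, after RESTORE, as strings:

    100 REM ILLUSTRATION ONLY - NOT PART OF THE NBS TEST SUITE
    110 DATA 123, 456
    120 READ A, B
    130 PRINT "READ AS NUMBERS:"; A; B
    140 RESTORE
    150 READ A$, B$
    160 PRINT "READ AS STRINGS: "; A$; " "; B$
    170 END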


    5.10.2 Exceptions
    -----------------

    The exceptions can be understood directly from the programs. Note
    that string
    overflow may or may not occur, depending on the implementation-defined
    maximum
    string length. If overflow (loss of data) does occur, the
    processor must
    report an exception, and execution must terminate. If there is no
    exception
    report, look carefully at the output, to assure that no loss of
    data has
    occurred.


    5.10.3 Errors
    -------------

    All of the error tests display results if the implementation
    accepts them,
    allowing you to check that the documentation matches the actual
    behavior of
    the processor. Some of the illegal constructs are likely
    candidates for
    enhancements, and thus the diagnostic feature is important, here.


    5.11 INPUT
    ----------

    This group, like that for PRINT, calls for a good deal of user
    participation.
    This participation takes the form, not only of interpreting program
    output,
    but also of supplying appropriate INPUT replies. The validity of
    this group
    depends strongly on the entry of correct replies.


    5.11.1 Standard capabilities
    ----------------------------

    The first program assures that the processor can accept as
    input ANY
    SYNTACTICALLY VALID NUMBER. It is absolutely essential, then, that
    you reply
    to each message with precisely the same set of characters that it asks
    for. If
    it tells you to enter "1.E22", you must not reply with, e.g.,
    "1E22". This
    would defeat one of the purposes of that reply, which is to see
    whether the
    processor correctly handles a decimal point immediately before the
    exponent.
    Once you have correctly entered the reply, one of several things can
    happen.
    If the processor in some way rejects the reply, for instance by
    producing a
    message that it is not a valid number, then THE PROCESSOR HAS FAILED
    THE TEST
    since all the replies are, in fact, valid according to the standard.
    To get by
    this problem, simply enter any number NOT numerically equal to
    the one
    originally requested. This will let you get on to the other items,
    and will
    signal a failure to the processor, as described below.

    If the processor accepts the reply, the program then tests that six
    digits of
    accuracy have been preserved. If so, you will get a message that the
    test is
    OK, and may go on to the next reply. If not, you will get a message
    indicating
    that the correct value was not received, and the program will ask if
    you want
    to retry that item. If you simply mistyped the original input-
    reply, you
    should enter the code for a retry. If your original reply was correct,
    but the
    processor misinterpreted the numeric value, there is no point to
    retrying;
    just go ahead to the next item. The program will count up all the
    failures,
    and report the total at the end of the program.

    The next program, for array input, assures that you can enter numbers
    into an
    array, and that assignments are done left to right, so that a
    statement such
    as "INPUT I, A(I)" allows you to control which element of the array
    gets the
    value. Also, it is here (and only here) that the standard's
    requirement for
    checking the input-reply before assignment is tested. Your first reply
    to this
    section of the test must cause an exception, and you must be allowed
    to re-
    enter the entire reply, OTHERWISE, THE TEST FAILS. The rest of the
    program is
    self-checking.

    The program for string input comes next and, as with the
    numeric input
    program, two considerations are paramount: 1) you should enter your
    replies
    exactly as indicated in the message, and 2) all input
    replies are
    syntactically valid and, therefore, IF THE IMPLEMENTATION REJECTS ANY
    OF THEM,
    IT FAILS THE TEST. A potentially troublesome aspect of this program
    is that
    the prompting message cannot always look exactly like your
    reply. In
    particular, your replies will sometimes include blanks and
    quotes. It is
    impossible to PRINT the quote character in Minimal BASIC, so the
    number-sign
    (#) is used instead. For ease of counting characters, an equals (=) is
    used in
    the message to represent blanks. THEREFORE, WHEN YOU SEE THE NUMBER-
    SIGN, TYPE
    THE QUOTE; AND WHEN YOU SEE THE EQUALS, TYPE THE BLANK. If you
    forget, the
    item will fail, and you will have a chance to retry, so make sure
    that a
    reported failure really is a processor failure, and not just
    your own
    mistyping before bypassing the retry. As with the numeric input,
    if the
    processor rejects one of the replies, simply enter any reply whose
    evaluation
    is not equal to that of the prompting message, to bypass the item and
    force a
    failure. The second section of the string input program does NOT
    use the
    substitute characters in the message; rather, you always type exactly
    what you
    see in the message SURROUNDED BY QUOTES.

    The program for mixed input follows the conventions individually
    established
    by the numeric and string input programs. Its purpose is simply to
    assure that
    the implementation can handle both string and numeric data in the same
    reply.


    5.11.2 Exceptions
    -----------------

    Unlike the other groups, where each exception type is tested with
    its own
    program, all the mandatory exceptions for INPUT are gathered into one
    routine.
    There are two reasons for this: first, there are so many
    variations worth
    trying that a separate program for each would be impractical, and
    second, the
    recovery procedures are the same for all input exception types. It
    is, then,
    both economical and convenient to group together all the various
    possibilities
    into one program. Underflow on INPUT is an optional exception,
    and has a
    different recovery procedure, governed by the semantics for numeric
    constants,
    rather than INPUT. It, therefore, is tested in its own separate
    program.

    The conformance requirements for input exceptions are perhaps the most
    complex
    of any in the standard. It is worthwhile to review these requirements
    in some
    detail, and then relate them to the test. The standard says that
    "unquoted-
    strings that are numeric-constants must be supplied as input for
    numeric-
    variables, and either quoted-strings or unquoted-strings must be
    supplied as
    input for string-variables". Since the syntactic entities mentioned
    are well-
    defined in the standard, this specification seems clear enough.
    Recall,
    however, that processors can, in general, enlarge the class of
    syntactic
    objects which they accept. In particular, a processor may have an
    enhanced
    definition of quoted-string, unquoted-string, numeric-constant,
    or, more
    generally, input-reply, and therefore accept a reply not strictly
    allowed by
    the standard, just as standard implementations may accept,
    and render
    meaningful, non-standard programs. The result is that the conditions
    for an
    input exception may depend on implementation-defined features, and
    thus a
    given input-reply may cause an exception for one processor, and
    yet not
    another. Note that the same situation prevails for overflow -- the
    exception
    depends on the implementation-defined maximum string length and
    machine
    infinity. Thus, "LET A = 1E37 * 100" may cause overflow on one
    standard
    processor, but not another.

    When running the program then, a given input-reply need not
    generate an
    exception if there is a documented enhancement which
    describes its
    interpretation. Of course, such an enhancement must not change the
    meaning of
    any input-reply which is syntactically standard. Note that, of the
    replies
    called for in the program, some are syntactically standard, and some
    are not;
    they should, however, all cause exceptions on a truly
    *minimal* BASIC
    processor, i.e., one with no syntactic enhancements, with machine
    infinity =
    1E38, and with maximum string length of 18.

    Another problem is that, for some replies, it is not clear which
    exception
    type applies. If, for instance, you respond to "INPUT A,B,C" with:
    "2,,3", it
    may be taken as a wrong type, since a numeric-constant was not
    supplied for B,
    or as insufficient data, since only two, not three, were supplied. In
    such a
    case, as with all exception reports, it is sufficient if the
    report is a
    reasonably accurate description of what went wrong, regardless of
    precisely
    how the report corresponds to the types defined in the standard.

    As with all non-fatal exceptions, it is permitted for an
    implementation to
    treat a given INPUT exception as fatal, if the hardware or
    operating
    environment makes the recovery procedure impossible to follow. The
    program is
    set up with table-driven logic, so that each exception is triggered
    by a set
    of values in a given DATA statement. If you need to separate out some
    of the
    cases because they cause the program to terminate, simply delete
    the DATA
    statements for those cases. REM statements in the program describe
    the format
    of the data.

    After that lengthy preliminary discussion, we are now ready to
    consider how to
    operate and interpret the test. The program will ask you for a reply,
    and also
    show you the INPUT statement to which it is directed, to help you
    understand
    why the exception should occur. Enter the exception-provoking reply,
    exactly
    as requested by the message. If all goes well, the implementation
    will give
    you an exception report, and allow you to re-supply the entire input-
    reply. On
    this second try, simply enter all zeros, exactly as many as needed
    by the
    complete original INPUT statement, to bypass that case -- this will
    signal the
    program that that case has passed, and you will then receive the next
    message.

    Now, let us look at what might go wrong. If the implementation simply
    accepts
    the initial input-reply, the program will display the
    resulting values
    assigned to the variables, and signal a possible failure. If the
    documentation
    for the processor describes an enhancement which agrees with
    the actual
    result, then that case passes; otherwise, it is a failure.

    Suppose the implementation reports an exception, but does not allow
    you to re-
    supply the ENTIRE input-reply. At that point, just do whatever the
    processor
    requires to bypass that case. You should supply non-zero input to
    signal the
    program that the case in question has failed.

    When the program detects an apparent failure (non-zeros in the
    variables), it
    allows you to retry the whole case. As before, if you mistyped,
    you should
    reply that you wish to retry; if the processor simply
    mishandled the
    exception, reject the retry and move on to the next case.

    Figure 3 outlines the user's operating and interpretation procedure
    for the
    INPUT exception test.

    Inspect message from program
    Supply exact copy of message as input-reply
    If processor reports exception
    then
        if processor allows you to re-supply entire reply
        then
            enter all zeros (exactly enough to satisfy
                original INPUT request)
            if processor responds that test passed
            then
                test passed
            else (no pass message after entering zeros)
                zeros not assigned to variables
                test failed (recovery procedure not followed)
            endif
        else (not allowed to re-supply entire reply)
            supply any non-zero reply to bypass this case
            test failed (recovery procedure not followed)
        endif
    else (no exception report)
        if documentation for processor correctly describes
            syntactic enhancement to accept the reply
        then
            test passed
        else (no exception, and incorrect/missing documentation)
            test failed
        endif
    endif

    Figure 3. Instructions for the INPUT exceptions test


    5.11.3 Errors
    -------------

    There is only one error program, and it tests the effect of a null
    entry in
    the input-list. The usual rules for error tests apply (see Section
    4.4.2.3).


    5.12 Implementation-supplied functions
    --------------------------------------

    All conforming implementations must make available to the programmer
    the set
    of functions defined in Section 8 of the ANSI standard. The purpose
    of this
    group is to assure that these functions have actually been
    implemented, and
    also to measure at least roughly the quality of implementation.


    5.12.1 Precise functions: ABS, INT, SGN
    ---------------------------------------

    These three functions are distinguished among the eleven supplied
    functions,
    in that any reasonable implementation should return a precise value
    for them.
    Therefore, they can be tested in a more stringent manner than the
    other eight,
    which are inherently approximate (i.e., a discrete machine cannot
    possibly
    supply an exact answer for most arguments).

    The structure of the tests is simple: the function under test is
    invoked with
    a variety of argument values, and the returned value is compared
    to the
    correct result. If all results are equal, the test passes;
    otherwise, it
    fails. The values are displayed for your inspection, and the tests
    are self-
    checking. The test for the INT function has a second section which
    does an
    informative test on the values returned for large arguments
    requiring more
    than six digits of accuracy.


    5.12.2 Approximated functions: SQR, ATN, COS, EXP, LOG, SIN, TAN
    ----------------------------------------------------------------

    These functions do not, typically, return rational values for
    rational
    arguments, and thus may only be approximated by digital
    computers.
    Furthermore, the standard explicitly disavows any criterion of
    accuracy,
    making it difficult to say when an implementation has definitely
    failed a
    test. Because of these constraints, the non-exception tests in this
    group are
    informative only. We can, however, quite easily apply the ideas
    developed
    earlier in Section 5.6.4. As explained there, we can devise an
    accuracy
    criterion for the implementation of a function, based on a
    hypothetical six
    decimal digit machine. If a function returns a value less accurate
    even than
    that of which this worst-case machine is capable, the informative test
    fails.

    To repeat the earlier guidance for the numeric operations: this
    approach
    imposes only a very minimal requirement. You may well want to set a
    stricter
    standard for the implementation under test. For this reason, the
    programs in
    this group also compute and report an error measure, which gives an
    estimate
    of the degree of accuracy achieved, again relative to a six-digit
    machine. The
    error measure thus goes beyond a simple pass/fail report, and
    quantifies how
    well or poorly the function value was computed. Of course, the error
    measure
    itself is subject to inaccuracy in its own internal computation, and
    no one
    measurement should be taken as precisely correct. Nonetheless, when
    the error
    measures of all the cases are considered in the aggregate, it should
    give a
    good overall picture of the quality of function evaluation. Since it
    is based
    on the same allowed interval for values as the pass/fail criterion,
    it too
    measures the quality of function evaluation independent of the
    function and
    argument under test. It does depend on the internal accuracy with
    which the
    implementation can represent numeric quantities: the greater the
    accuracy, the
    smaller the error measure should become. As a rough guide, the error
    measures
    should all be < 10^(6-d), where d is the number of significant
    decimal digits
    supported by the implementation (this is determined in the standard
    tests for
    numeric operations, group 6.1). For instance, an eight decimal digit
    processor
    should have all error measures < .01.

    Another point to be stressed: even though the results of these
    tests are
    informative, THE TESTS THEMSELVES ARE SYNTACTICALLY STANDARD, AND THUS
    MUST BE
    ACCEPTED AND PROCESSED BY THE IMPLEMENTATION. If, for instance, the
    processor
    does not recognize the ATN function and rejects the program, it
    definitely
    fails to conform to the standard. This is in contrast to the
    case of a
    processor which accepts the program, but returns somewhat inaccurate
    values.
    The latter processor is arguably standard-conforming, even if of low
    quality.

    This group also contains exception tests for those conditions so
    specified in
    the ANSI standard. Most of these can be understood in light of the
    general
    guidance given for exceptions. The program for overflow of the TAN
    function
    deserves some comment. Since it is questionable whether overflow can
    be forced
    simply by encoding PI/2 as a numeric constant for the source code
    argument,
    the program attempts to generate the exception by a convergence
    algorithm. It
    may be, however, that no argument exists which will cause overflow,
    so you
    must verify merely that, if overflow occurs, then it is
    reported as an
    exception. For instance, if several of the function calls return
    machine
    infinity, it is clear that overflow has occurred and, if there
    were no
    exception report in such a case, the test fails. Also, as a
    measure of
    quality, the returned values with a given sign should increase in
    magnitude
    until overflow occurs, i.e., all the positive values should form an
    ascending
    sequence, and the negative values a descending sequence.


    5.12.3 RND and RANDOMIZE
    ------------------------

    Unlike the other functions, there is no single correct value to be
    returned by
    any individual reference to RND, but only the properties of an
    aggregation of
    returned values are specified. The standard says that these
    values are
    "uniformly distributed in the range 0 <= RND < 1". Also, Section 17
    specifies
    that, in the absence of the RANDOMIZE statement, RND will generate
    the same
    pseudo-random sequence for each execution of a program;
    conversely, each
    execution of RANDOMIZE "generates a new unpredictable starting point"
    for the
    sequence produced by RND. The RND tests follow closely the strategy
    put forth
    in Chapter 3.3.1 of Don Knuth's "The Art of Computer Programming,
    Volume 2"
    [Ref.4], which explains fully the rationale for the programs in this
    group.


    5.12.3.1 Standard capabilities
    ------------------------------

    The first two programs test that the same sequence, or a novel
    sequence,
    appear as appropriate, depending on whether RANDOMIZE has executed.
    Note that
    you must execute both of these programs three times apiece, since
    the RND
    sequence is initialized by the implementation only when execution
    begins. The
    next three programs all test properties of the sequence, which follow
    directly
    from the specification that it is uniformly distributed in the range 0
    <= RND
    < 1. If the results make it quite improbable that the distribution is
    uniform,
    or if any value returned is outside the legal range, then the test
    fails. Of
    course, any implementation could pass simply by adjusting the RND
    algorithm or
    starting point until a passing sequence is generated. In order to
    measure the
    quality of implementation, you can run the programs with a RANDOMIZE
    statement
    in the beginning, and then observe how often the test passes or
    fails. Note
    that, if you use RANDOMIZE, these programs should fail a certain
    proportion of
    the time, since they are probabilistic tests.
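
    As a rough, informal sketch of the kind of check involved (this is
    NOT one of the NBS programs, and it is far weaker than the Knuth-style
    tests they use), one might bucket a run of RND values and confirm that
    roughly half fall below .5 and that none fall outside the legal range:

    100 REM ILLUSTRATION ONLY - A CRUDE FREQUENCY CHECK ON RND
    110 RANDOMIZE
    120 LET C = 0
    130 FOR I = 1 TO 1000
    140 LET X = RND
    150 IF X < 0 THEN 230
    160 IF X >= 1 THEN 230
    170 IF X >= .5 THEN 190
    180 LET C = C + 1
    190 NEXT I
    200 PRINT "VALUES BELOW .5:"; C; "OUT OF 1000"
    210 GOTO 240
    230 PRINT "VALUE OUT OF LEGAL RANGE - RND FAILS"
    240 END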


    5.12.3.2 Informative tests
    --------------------------

    There are several desirable properties of a sequence of pseudo-random
    numbers
    which are not strictly implied by uniform distribution. If, for
    instance, the
    numbers in the sequence alternated between <= .5 and > .5, they might
    still be
    uniform, but would be non-random in an important way. These tests
    attempt to
    measure how well the implementation has approached the ideal of a
    perfectly
    random sequence by looking for patterns indicative of non-randomness
    in the
    sequence actually produced. Like the tests for standard
    capabilities, these
    programs are probabilistic, and any one of them may fail without
    necessarily
    implying that the RND sequence is not random. If a high-quality RND
    function
    is important for your purposes, we suggest you run each of these
    programs
    several times with the RANDOMIZE statement. If a given test seems to
    fail far
    more often than likely, it may well indicate a weakness in the RND
    algorithm.


    5.12.4 Errors
    -------------

    The tests in this group all use an argument-list which is incorrect
    in some
    way, either for the particular function, or because of the general
    rules of
    syntax. As always, if the processor does accept any of them, the
    documentation
    must be consistent with the actual results. Note that the ANSI
    standard
    contains a misprint, indicating that the TAN function takes no
    arguments. The
    tests are written to treat TAN as a function of a single variable.


    5.13 User-defined functions
    ---------------------------

    The standard provides a facility, so that programmers can define
    functions of
    a single variable in the form of a numeric expression. This group
    of tests
    exercises both the invoking mechanism (function references) and the
    defining
    mechanism (DEF statement).


    5.13.1 Standard capabilities
    ----------------------------

    These programs test a variety of properties guaranteed by the
    standard: the
    DEF statement must allow any numeric expression as the function
    definition;
    the parameter, if any, must not be confused with a global variable of
    the same
    name; global variables, other than one with the same name as the
    parameter,
    are available to the function definition; a DEF statement in the
    path of
    execution has no effect; invocation of a function as such never
    changes the
    value of any variable; the set of valid names for user-defined
    functions is
    "FN" followed by any alphabetic character. The tests are self-
    checking. As
    with the numeric operations, a very loose criterion of accuracy is
    used to
    check the implementation. Its purpose is not to check accuracy as
    such, but
    only to assure that the semantic behavior accords with the standard.
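
    A minimal sketch of several of these properties together (not one of
    the NBS programs): the parameter X is local to the definition, the
    global B is visible inside it, and invoking the function changes no
    variable:

    100 REM ILLUSTRATION ONLY - NOT PART OF THE NBS TEST SUITE
    110 DEF FNA(X) = X * X + B
    120 LET X = 10
    130 LET B = 1
    140 PRINT "FNA(3) ="; FNA(3)
    150 PRINT "GLOBAL X IS STILL"; X
    160 REM EXPECTED: FNA(3) = 10, AND X REMAINS 10
    170 END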


    5.13.2 Errors
    -------------

    Many of these tests are similar to the error tests for implementation-
    supplied
    functions, in that they try out various malformed argument lists.
    There are
    also some tests involving the DEF statement, in particular
    for the
    requirements that a program contain exactly one DEF statement for
    each user
    function referred to in the program, and that the definition
    precede any
    references.


    5.14 Numeric expressions
    ------------------------

    Numeric expressions have a somewhat special place in the
    Minimal BASIC
    standard. They are the most complex entity, syntactically, for two
    reasons.
    First, the expression itself may be built up in a variety of ways.
    Numeric
    constants, variables, and function references are combined using any
    of five
    operations. The function references themselves may be to user-
    defined
    expressions. And, of course, expressions can be nested, either
    implicitly, or
    explicitly with parentheses. Second, not only do the expressions
    have a
    complex internal syntax, but also they may appear in a number
    of quite
    different contexts. Not just the LET statement, but also the IF,
    PRINT, ON-
    GOTO, and FOR statements, can contain expressions. Also, they may be
    used as
    array subscripts, or as arguments in a function reference. Note
    that, when
    they are used in the ON-GOTO statement, as subscripts, or as arguments
    to TAB,
    expressions must be rounded to the nearest integer.

    The overall strategy of the test system is first to assure that the
    elements of numeric expressions are handled correctly, then to try
    out increasingly complex expressions in the comparatively simple
    context of the LET statement, and finally to verify that these
    complex expressions work properly in the other contexts mentioned.
    Preceding groups have already accomplished the first task of checking
    out individual expression elements, such as constants, variables
    (both simple and array), and function references. This group
    completes the latter two steps.


    5.14.1 Standard capabilities in context of LET-statement
    --------------------------------------------------------

    This test tries out various lengthy expressions, using the full
    generality allowed by the standard, and assigns the resulting value
    to a variable. As usual, if this value is even approximately correct,
    the test passes, since we are interested in semantics rather than
    accuracy. The program displays the correct value and actual computed
    value. This test also verifies that subscript expressions evaluate to
    the nearest integer.


    5.14.2 Expressions in other contexts: PRINT, IF, ON-GOTO, FOR
    -------------------------------------------------------------

    Please note that the PRINT test, like other PRINT tests, is
    inherently incapable of checking itself and, therefore, you must
    inspect and interpret the results. The PRINT program first tests the
    use of expressions as print-items. Check that the actual and correct
    values are reasonably close. The second section of the program tests
    that the TAB call is handled correctly. Simply verify that the
    characters appear in the appropriate columns.

    The second program is self-checking, and tests IF, ON-GOTO, and FOR,
    one in each section. As with other tests of control statements, the
    diagnostics are rather sparse for failures. Check Volume 2 for an
    example of correct output.


    5.14.3 Exceptions in subscripts and arguments
    ---------------------------------------------

    The exceptions specified in Sections 7 and 8 apply to numeric
    expressions in whatever context they occur. These tests simply assure
    that the correct values are supplied, e.g., machine infinity for
    overflow, zero for underflow, and that the execution continues
    normally as if that value had been put in that context as, say, a
    numeric constant. Sometimes, this action will produce normal results,
    and sometimes will trigger another exception, e.g., machine infinity
    supplied as a subscript. Simply verify that the exception reports are
    produced as specified in the individual tests.
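
    A hypothetical sketch of this kind of test follows (not one of the
    actual NBS programs; it assumes that X*X exceeds the implementation's
    largest representable number, which is not guaranteed on every
    machine):

    10 REM HYPOTHETICAL SKETCH -- NOT ONE OF THE ACTUAL NBS TEST PROGRAMS
    20 DIM A(10)
    30 LET X = 1E30
    40 REM ASSUMING X*X EXCEEDS THE LARGEST NUMBER, AN OVERFLOW EXCEPTION
    50 REM OCCURS; THE RECOVERY VALUE (MACHINE INFINITY) IS THEN SUPPLIED
    60 REM AS A SUBSCRIPT, WHICH TRIGGERS A FURTHER EXCEPTION
    70 LET A(X*X) = 1
    80 END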


    5.14.4 Exceptions in other contexts: PRINT, IF, ON-GOTO, FOR
    ------------------------------------------------------------

    As in the immediately preceding section, these tests make sure that
    the recovery procedures have the natural effect, given the context in
    which they occur. As usual for exception tests, it is up to you to
    verify that reasonable exception reports appear. The PRINT tests also
    require user interpretation, to some degree.


    5.15 Miscellaneous checks
    -------------------------

    This group consists mostly of error tests in which the error is tied
    not to some specific functional area, but rather to the general
    format rules for BASIC programs. If you are not already thoroughly
    familiar with the general criteria for error tests, it would be wise
    to review them (Sections 3.2.2 and 4.4.2.3 of this document) before
    going through this group. A few tests require special comment, and
    this is supplied below in the appropriate subsection.


    5.15.1 Missing keyword
    ----------------------

    Many implementations of BASIC allow programs to omit the keyword LET
    in assignment statements. This program checks that possibility, and
    reports the resulting behavior if accepted.
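
    A hypothetical sketch of the construct being probed (not one of the
    actual NBS programs):

    10 REM HYPOTHETICAL SKETCH -- NOT ONE OF THE ACTUAL NBS TEST PROGRAMS
    20 LET A = 1
    30 REM THE NEXT LINE OMITS THE KEYWORD LET AND IS THEREFORE NOT
    40 REM STANDARD MINIMAL BASIC; SOME PROCESSORS ACCEPT IT ANYWAY
    50 A = 2
    60 PRINT A
    70 END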


    5.15.2 Spaces
    -------------

    Sections 3 and 4 of the ANSI standard specify several context
    sensitive rules for the occurrence of spaces in a BASIC program. The
    standard test assures that, wherever one space may occur, several
    spaces may occur with no effect, except within a quoted- or
    unquoted-string. There are certain places where spaces either must,
    or may, or may not appear, and the error programs test how the
    implementation treats various violations of the rules.
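
    For illustration, a hypothetical fragment in this spirit (not one of
    the actual NBS programs):

    10 REM HYPOTHETICAL SKETCH -- NOT ONE OF THE ACTUAL NBS TEST PROGRAMS
    20 LET     A     =     1
    30 REM THE EXTRA SPACES ABOVE MUST HAVE NO EFFECT, BUT THE SPACES
    40 REM INSIDE THE QUOTED STRING BELOW MUST BE KEPT AS THEY ARE
    50 PRINT "A   =   "; A
    60 END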


    5.15.3 Quotes
    -------------

    These programs test the effect of using either a single or double
    quote in a quoted string. Some processors may interpret the double
    quote as a single occurrence of the quote character within the
    string. The programs test the effect of aberrant quotes in the
    context of the PRINT and the LET statements.
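
    A hypothetical sketch of the kind of lines such error programs submit
    (not one of the actual NBS programs; a strictly conforming processor
    may reject or report them):

    10 REM HYPOTHETICAL SKETCH -- NOT ONE OF THE ACTUAL NBS TEST PROGRAMS
    20 REM STANDARD MINIMAL BASIC DOES NOT ALLOW A QUOTE CHARACTER INSIDE
    30 REM A QUOTED STRING; THESE LINES SHOW WHETHER THE PROCESSOR
    40 REM REJECTS, REPORTS, OR ACCEPTS SUCH QUOTES
    50 PRINT "HE SAID ""HELLO"""
    60 LET A$ = "DON'T PANIC"
    70 PRINT A$
    80 END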


    5.15.4 Line numbers
    -------------------

    The first of these programs is a standard, not an error, test. It
    verifies that leading zeros in line numbers have no effect. The other
    programs all deal with some violation of the syntax rules for line
    numbers. When submitting these programs to your implementation, you
    should not explicitly call for any sorting or renumbering of lines.
    If the implementation sorts the lines by default, even when the
    program is submitted to it in the simplest way, the documentation
    must make this clear. Such sorting merely constitutes a particular
    type of syntactic enhancement, i.e., to treat a program with lines
    out of order as if they were in order. Similarly, an implementation
    may discard duplicate lines, or append line numbers to the beginning
    of lines missing them, as long as these actions occur without special
    user intervention and are documented. Of course, processors may also
    reject such programs, with an error message to the user.
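
    A hypothetical sketch of the leading-zero case (not one of the actual
    NBS programs):

    0010 REM HYPOTHETICAL SKETCH -- NOT ONE OF THE ACTUAL NBS PROGRAMS
    0020 REM LEADING ZEROS IN LINE NUMBERS MUST HAVE NO EFFECT
    0030 PRINT "LINE 30"
    40 PRINT "LINE 40"
    0050 END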


    5.15.5 Line longer than 72 characters
    -------------------------------------

    This program tests the implementation's reaction to a line whose
    length is greater than the standard limit of 72. Many implementations
    accept longer lines; if so, the documentation must specify the limit.


    5.15.6 Margin overflow for output line
    --------------------------------------

    This is not an error test, but a standard one. Further, it involves
    PRINT capabilities, and therefore calls for careful user
    interpretation. Its purpose is to assure correct handling of the
    margin and print zones, relative to the implementation-defined length
    for each of those two entities. After you have entered the
    appropriate values, the program will generate pairs of output, with
    either one or two printed lines for each member of the pair. The
    first member is produced using primitive capabilities of PRINT, and
    is intended to show what the output SHOULD look like. The second
    member of the pair is produced using the facilities under test, and
    shows what the output actually looks like. If the two members differ
    at all, the test fails. It could happen, however, that the first
    member of the pair does not produce the correct output either. You
    should, therefore, closely examine the sample output for this test in
    Volume 2 to understand what the expected output is. Of course, the
    sample is exactly correct only for implementations with the same
    margin and zone width, but allowing for the possibly different widths
    of your processor, the sample should give you the idea of what your
    processor must do.


    5.15.7 Lowercase characters
    ---------------------------

    These two tests tell you whether your processor can handle lowercase
    characters in the program, and, if so, whether they are converted to
    uppercase or left as lowercase.


    5.15.8 Ordering strings
    -----------------------

    This program tests whether your implementation accepts comparison
    operators other than the standard = or <> for strings. If the
    processor does accept them, the program assumes that the
    interpretation is the intuitively appealing one, and prints
    informative output concerning the implicit character collating
    sequence, and also some comparison results for multi-character
    strings.
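
    A hypothetical sketch of the non-standard construct involved (not one
    of the actual NBS programs):

    10 REM HYPOTHETICAL SKETCH -- NOT ONE OF THE ACTUAL NBS TEST PROGRAMS
    20 REM ONLY = AND <> ARE STANDARD FOR STRINGS; THE < BELOW IS AN
    30 REM ENHANCEMENT THAT A PROCESSOR MAY ACCEPT OR REJECT
    40 IF "ABC" < "ABD" THEN 70
    50 PRINT "COMPARISON IS FALSE"
    60 GOTO 80
    70 PRINT "COMPARISON IS TRUE"
    80 END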


    5.15.9 Mismatch of types in assignment
    --------------------------------------

    These programs check whether the processor accepts assignment of a
    string to a numeric variable and vice-versa, and, if so, what the
    resulting value of the receiving variable is. As usual, make sure
    your documentation covers these cases if the implementation accepts
    these programs.
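
    A hypothetical sketch of the constructs being probed (not one of the
    actual NBS programs; both assignments mix types on purpose):

    10 REM HYPOTHETICAL SKETCH -- NOT ONE OF THE ACTUAL NBS TEST PROGRAMS
    20 REM BOTH ASSIGNMENTS BELOW MIX TYPES AND ARE NOT STANDARD
    30 REM MINIMAL BASIC; A PROCESSOR MAY REJECT OR ACCEPT THEM
    40 LET A = "TEXT"
    50 LET B$ = 123
    60 END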


    Section 6: Tables of summary information about the Test Programs
    ----------------------------------------------------------------

    This section contains three tables which should help you find your
    way around the programs and the ANSI standard. The first table
    presents the functional grouping of the tests, and shows which
    programs are in each group and the sections of the ANSI standard
    whose specifications are being tested thereby. The second table lists
    all the programs individually by number and title, and also the
    particular sections and subsections of the standard to which they
    apply. The third table lists the sections and subsections of the
    standard in order, followed by a list of program numbers for those
    sections. This third table is especially important if you want to
    test the implementation of only certain parts of the standard. Be
    aware, however, that, since the sections of the standard are NOT
    tested in order, the tests for a given section may rely on the
    implementation of later sections in the standard, which have been
    tested earlier in the test sequence.


    6.1 Group structure of the Minimal BASIC Test Programs
    ------------------------------------------------------

    Format: Group
    Program number
    ANSI section


    1 Simple PRINTing of string constants
    1
    3, 5, 12

    2 END and STOP
    2-5
    4, 10

    2.1 END
    2-4
    4

    2.2 STOP
    5
    10

    3 PRINTing and simple assignment (LET)
    6-14
    5, 6, 9, 12

    3.1 String variables and TAB
    6-8
    6, 9, 12

    3.2 Numeric constants and variables
    9-14
    5, 6, 9, 12

    4 Control statements and REM
    15-21
    10, 18

    4.1 REM and GOTO
    15-16
    10, 18

    4.2 GOSUB and RETURN
    17
    10

    4.3 IF-THEN
    18-21
    10

    5 Variables
    22-23
    6

    6 Numeric constants, variables, and operations
    24-43
    5, 6, 7

    6.1 Standard capabilities
    24-27
    5, 6, 7

    6.2 Exceptions
    28-35
    5, 7

    6.3 Errors
    36-38
    7

    6.4 Accuracy tests -- informative
    39-43
    7

    7 FOR-NEXT
    44-55
    10, 11

    7.1 Standard capabilities
    44-49
    10, 11

    7.2 Errors
    50-55
    11

    8 Arrays
    56-84
    6, 7, 9, 15

    8.1 Standard capabilities
    56-62
    6, 7, 9, 15

    8.2 Exceptions
    63-72
    6, 15

    8.3 Errors
    73-84
    6, 15

    9 Control statements
    85-91
    10

    9.1 GOSUB and RETURN
    85-87
    10

    9.2 ON-GOTO
    88-91
    10

    10 READ, DATA, and RESTORE
    92-106
    3, 5, 14

    10.1 Standard capabilities
    92-95
    3, 5, 14

    10.2 Exceptions
    96-101
    14

    10.3 Errors
    102-106
    3, 14

    11 INPUT
    107-113
    3, 5, 13

    11.1 Standard capabilities
    107-110
    3, 5, 13

    11.2 Exceptions
    111-112
    3, 5, 13

    11.3 Errors
    113
    13

    12 Implementation-supplied functions
    114-150
    7, 8, 17

    12.1 Precise functions: ABS, INT, SGN
    114-116
    8

    12.2 Approximated functions: SQR, ATN, COS, EXP, LOG, SIN, TAN
    117-129
    7, 8

    12.3 RND and RANDOMIZE
    130-142
    8, 17

    12.3.1 Standard capabilities
    130-134
    8, 17

    12.3.2 Informative tests
    135-142
    8

    12.4 Errors
    143-150
    7, 8

    13 User-defined functions
    151-163
    7, 16

    13.1 Standard capabilities
    151-152
    7, 16

    13.2 Errors
    153-163
    7, 16

    14 Numeric expressions
    164-184
    6, 7, 8, 10, 11, 12, 16

    14.1 Standard capabilities in context of LET-statement
    164
    6, 7, 8, 16

    14.2 Expressions in other contexts: PRINT, IF, ON-GOTO, FOR
    165-166
    7, 10, 11, 12

    14.3 Exceptions in subscripts and arguments
    167-171
    6, 7, 8, 16

    14.4 Exceptions in other contexts: PRINT, IF, ON-GOTO, FOR
    172-184
    7, 10, 11, 12

    15 Miscellaneous checks
    185-208
    3, 4, 9, 10, 12

    15.1 Missing keyword
    185
    9

    15.2 Spaces
    186-191
    3, 4

    15.3 Quotes
    192-195
    3, 9, 12

    15.4 Line numbers
    196-201
    4

    15.5 Line longer than 72 characters
    202
    4

    15.6 Effect of zones and margin on PRINT
    203
    12

    15.7 Lowercase characters
    204-205
    3, 9, 12

    15.8 Ordering relations between strings
    206
    3, 10

    15.9 Mismatch of types in assignment
    207-208
    9


    6.2 Test program sequence
    -------------------------

    Program number 1
    Null PRINT and PRINTing quoted strings.
    Refs: 3.2 3.4 5.2 5.4 12.2 12.4

    Program number 2
    The END-statement.
    Refs: 4.2 4.4

    Program number 3
    Error -- misplaced END-statement.
    Refs: 4.2 4.4

    Program number 4
    Error -- missing END-statement.
    Refs: 4.2 4.4

    Program number 5
    The STOP-statement.
    Refs: 10.2 10.4

    Program number 6
    PRINT-separators, TABs, and string variables.
    Refs: 6.2 6.4 9.2 9.4 12.2 12.4

    Program number 7
    Exception -- string overflow using the LET-statement.
    Refs: 9.5 12.4

    Program number 8
    Exception -- TAB argument less than one.
    Refs: 12.5

    Program number 9
    Printing nr1 and nr2 numeric constants.
    Refs: 5.2 5.4 12.4

    Program number 10
    Printing nr3 numeric constants.
    Refs: 5.2 5.4 12.4

    Program number 11
    Printing numeric variables assigned nr1 and nr2 constants.
    Refs: 5.2 5.4 6.2 6.4 9.2 9.4 12.4

    Program number 12
    Printing numeric variables assigned nr3 constants.
    Refs: 5.2 5.4 6.2 6.4 9.2 9.4 12.4

    Program number 13
    Format and rounding of printed numeric constants.
    Refs: 12.4 5.2 5.4

    Program number 14
    Printing and assigning numeric values near to the maximum and
    minimum magnitude.
    Refs: 5.4 9.4 12.4

    Program number 15
    The REM and GOTO statements.
    Refs: 18.2 18.4 10.2 10.4

    Program number 16
    Error -- transfer to a non-existing line number using the
    GOTO-statement.
    Refs: 10.4

    Program number 17
    Elementary use of GOSUB and RETURN.
    Refs: 10.2 10.4

    Program number 18
    The IF-THEN statement with string operands.
    Refs: 10.2 10.4

    Program number 19
    The IF-THEN statement with numeric operands.
    Refs: 10.2 10.4

    Program number 20
    Error -- IF-THEN statement with a string and numeric operand.
    Refs: 10.2

    Program number 21
    Error -- transfer to non-existing line number using the
    IF-THEN-statement.
    Refs: 10.4

    Program number 22
    Numeric and string variable names with the same initial letter.
    Refs: 6.2 6.4

    Program number 23
    Initialization of string and numeric variables.
    Refs: 6.6

    Program number 24
    Plus and minus.
    Refs: 7.2 7.4

    Program number 25
    Multiply, divide, and involute (exponentiate).
    Refs: 7.2 7.4

    Program number 26
    Precedence rules for numeric expressions.
    Refs: 7.2 7.4

    Program number 27
    Accuracy of constants and variables.
    Refs: 5.2 5.4 6.2 6.4 10.4

    Program number 28
    Exception -- division by zero.
    Refs: 7.5

    Program number 29
    Exception -- overflow of numeric expressions.
    Refs: 7.5

    Program number 30
    Exception -- overflow of numeric constants.
    Refs: 5.4 5.5

    Program number 31
    Exception -- zero raised to a negative power.
    Refs: 7.5

    Program number 32
    Exception -- negative quantity raised to a non-integral power.
    Refs: 7.5

    Program number 33
    Exception -- underflow of numeric expressions.
    Refs: 7.4

    Program number 34
    Exception -- underflow of numeric constants.
    Refs: 5.4 5.6

    Program number 35
    Exception -- overflow and underflow within sub-expressions.
    Refs: 7.4 7.5

    Program number 36
    Error -- unmatched parentheses in numeric expression.
    Refs: 7.2

    Program number 37
    Error -- use of '**' as operator.
    Refs: 7.2

    Program number 38
    Error -- use of adjacent operators.
    Refs: 7.2

    Program number 39
    Accuracy of addition.
    Refs: 7.2 7.4 7.6

    Program number 40
    Accuracy of subtraction.
    Refs: 7.2 7.4 7.6

    Program number 41
    Accuracy of multiplication.
    Refs: 7.2 7.4 7.6

    Program number 42
    Accuracy of division.
    Refs: 7.2 7.4 7.6

    Program number 43
    Accuracy of involution (exponentiation).
    Refs: 7.2 7.4 7.6

    Program number 44
    Elementary use of the FOR-statement.
    Refs: 11.2 11.4

    Program number 45
    Altering the control-variable within a FOR-block.
    Refs: 11.2 11.4

    Program number 46
    Interaction of control statements with the FOR-statement.
    Refs: 11.2 11.4 10.2 10.4

    Program number 47
    Increment in the STEP clause of the FOR-statement defaults to a value
    of one.
    Refs: 11.2 11.4

    Program number 48
    Limit and increment in the FOR-statement are evaluated once upon
    entering the loop.
    Refs: 11.2 11.4

    Program number 49
    Nested FOR-blocks.
    Refs: 11.2 11.4

    Program number 50
    Error -- FOR-statement without a matching NEXT-statement.
    Refs: 11.2 11.4

    Program number 51
    Error -- NEXT-statement without a matching FOR-statement.
    Refs: 11.2 11.4

    Program number 52
    Error -- mismatched control-variables on FOR-statement and
    NEXT-statement.
    Refs: 11.4

    Program number 53
    Error -- interleaved FOR-blocks.
    Refs: 11.4

    Program number 54
    Error -- nested FOR-blocks with the same control variable.
    Refs: 11.4

    Program number 55
    Error -- jump into FOR-block.
    Refs: 11.4

    Program number 56
    Array assignment without the OPTION-statement.
    Refs: 6.2 6.4 9.2 9.4 15.2 15.4

    Program number 57
    Array assignment with OPTION BASE 0.
    Refs: 6.2 6.4 9.2 9.4 15.2 15.4

    Program number 58
    Array assignment with OPTION BASE 1.
    Refs: 6.2 6.4 9.2 9.4 15.2 15.4

    Program number 59
    Array named 'A' is distinct from 'A$'.
    Refs: 6.2 6.4

    Program number 60
    Numeric constants used as subscripts are rounded to nearest integer.
    Refs: 6.4 5.4

    Program number 61
    Numeric expressions containing subscripted variables.
    Refs: 6.2 6.4 7.2 7.4

    Program number 62
    General syntactic and semantic properties of array control
    statements: OPTION and DIM.
    Refs: 15.2 15.4

    Program number 63
    Exception -- subscript too large for one-dimensional array.
    Refs: 6.5

    Program number 64
    Exception -- subscript too small for two-dimensional array.
    Refs: 6.5

    Program number 65
    Exception -- subscript too small for one-dimensional array, with DIM.
    Refs: 6.5 15.2 15.4

    Program number 66
    Exception -- subscript too large for two-dimensional array, with DIM.
    Refs: 6.5 15.2 15.4

    Program number 67
    Exception -- subscript too small for one-dimensional array, with
    OPTION BASE 1.
    Refs: 6.5 15.2 15.4

    Program number 68
    Exception -- subscript too large for one-dimensional array, with DIM
    and OPTION BASE 1.
    Refs: 6.5 15.2 15.4

    Program number 69
    Exception -- subscript too large for two-dimensional array, with DIM
    and OPTION BASE 0.
    Refs: 6.5 15.2 15.4

    Program number 70
    Exception -- subscript too small for one-dimensional array, with
    OPTION BASE 0.
    Refs: 6.5 15.2 15.4

    Program number 71
    Exception -- subscript too small for two-dimensional array, with DIM
    and OPTION BASE 0.
    Refs: 6.5 15.2 15.4

    Program number 72
    Exception -- subscript too small for two-dimensional array, with DIM
    and OPTION BASE 1.
    Refs: 6.5 15.2 15.4

    Program number 73
    Error -- DIM sets upper bound of zero with OPTION BASE 1.
    Refs: 15.4

    Program number 74
    Error -- DIM sets array to one dimension, and reference is made to
    two-dimensional variable of same name.
    Refs: 15.4 6.4

    Program number 75
    Error -- DIM sets array to one dimension, and reference is made to
    simple variable of same name.
    Refs: 15.4 6.4

    Program number 76
    Error -- DIM sets array to two dimensions, and reference is made to
    one-dimensional variable of same name.
    Refs: 15.4 6.4

    Program number 77
    Error -- reference to array and simple variable of same name.
    Refs: 6.4

    Program number 78
    Error -- reference to one-dimensional and two-dimensional variable
    of same name.
    Refs: 6.4

    Program number 79
    Error -- reference to array with letter-digit name.
    Refs: 6.2

    Program number 80
    Error -- multiple OPTION statements.
    Refs: 15.4

    Program number 81
    Error -- DIM-statement precedes OPTION-statement.
    Refs: 15.4

    Program number 82
    Error -- array-reference precedes OPTION-statement.
    Refs: 15.4

    Program number 83
    Error -- array-reference precedes DIM-statement.
    Refs: 15.4

    Program number 84
    Error -- DIMensioning the same array more than once.
    Refs: 15.4

    Program number 85
    General capabilities of GOSUB/RETURN.
    Refs: 10.4

    Program number 86
    Exception -- RETURN without GOSUB.
    Refs: 10.5

    Program number 87
    Error -- transfer to non-existing line number, using the
    GOSUB-statement.
    Refs: 10.4

    Program number 88
    The ON-GOTO-statement.
    Refs: 10.2 10.4

    Program number 89
    Exception -- ON-GOTO control expression less than 1.
    Refs: 10.5

    Program number 90
    Exception -- ON-GOTO control expression greater than number of
    line-numbers in list.
    Refs: 10.5

    Program number 91
    Error -- transfer to non-existing line number using the
    ON-GOTO-statement.
    Refs: 10.4

    Program number 92
    READ and DATA statements for numeric data.
    Refs: 5.2 14.2 14.4

    Program number 93
    READ and DATA statements for string data.
    Refs: 3.2 5.2 14.2 14.4

    Program number 94
    Reading DATA into subscripted variables.
    Refs: 14.2 14.4

    Program number 95
    General use of the READ, DATA, and RESTORE statements.
    Refs: 14.2 14.4

    Program number 96
    Exception -- numeric underflow when READing DATA causes replacement
    by zero.
    Refs: 5.5 14.4

    Program number 97
    Exception -- insufficient DATA for READ.
    Refs: 14.5

    Program number 98
    Exception -- READing unquoted string DATA into a numeric variable.
    Refs: 14.5

    Program number 99
    Exception -- READing quoted string DATA into a numeric variable.
    Refs: 14.5

    Program number 100
    Exception -- string overflow on READ.
    Refs: 14.5

    Program number 101
    Exception -- numeric overflow on READ.
    Refs: 14.5

    Program number 102
    Error -- illegal character in unquoted string in DATA statement.
    Refs: 3.2 14.2

    Program number 103
    Error -- READing quoted strings containing single quote.
    Refs: 3.2 14.2

    Program number 104
    Error -- READing quoted strings containing double quote.
    Refs: 3.2 14.2

    Program number 105
    Error -- null datum in DATA-list.
    Refs: 14.2

    Program number 106
    Error -- null entry in READ's variable-list.
    Refs: 14.2

    Program number 107
    INPUT of numeric constants.
    Refs: 5.2 13.2 13.4

    Program number 108
    INPUT to subscripted variables.
    Refs: 13.2 13.4

    Program number 109
    String INPUT.
    Refs: 3.2 13.2 13.4

    Program number 110
    Mixed INPUT of strings and numbers.
    Refs: 13.2 13.4

    Program number 111
    Exception -- numeric underflow on INPUT causes replacement by zero.
    Refs: 5.6 13.4

    Program number 112
    Exception -- INPUT-reply inconsistent with INPUT variable-list.
    Refs: 13.4 13.5 3.2 5.2

    Program number 113
    Error -- null entry in INPUT-list.
    Refs: 13.2

    Program number 114
    Evaluation of ABS function.
    Refs: 8.4

    Program number 115
    Evaluation of INT function.
    Refs: 8.4

    Program number 116
    Evaluation of SGN function.
    Refs: 8.4

    Program number 117
    Accuracy of SQR function.
    Refs: 7.6 8.4

    Program number 118
    Exception -- SQR of negative argument.
    Refs: 8.5

    Program number 119
    Accuracy of ATN function.
    Refs: 7.6 8.4

    Program number 120
    Accuracy of COS function.
    Refs: 7.6 8.4

    Program number 121
    Accuracy of EXP function.
    Refs: 7.6 8.4

    Program number 122
    Exception -- overflow on value of EXP function.
    Refs: 8.5

    Program number 123
    Exception -- underflow on value of EXP function.
    Refs: 8.4 8.6

    Program number 124
    Accuracy of LOG function.
    Refs: 7.6 8.4

    Program number 125
    Exception -- LOG of zero argument.
    Refs: 8.5

    Program number 126
    Exception -- LOG of negative argument.
    Refs: 8.5

    Program number 127
    Accuracy of SIN function.
    Refs: 7.6 8.4

    Program number 128
    Accuracy of TAN function.
    Refs: 7.6 8.4

    Program number 129
    Exception -- overflow on value of TAN function.
    Refs: 8.5

    Program number 130
    RND function without RANDOMIZE statement.
    Refs: 8.2 8.4

    Program number 131
    RND function with the RANDOMIZE statement.
    Refs: 8.2 8.4 17.2 17.4

    Program number 132
    Average of random numbers approximates 0.5 and 0 <= RND < 1.
    Refs: 8.4

    Program number 133
    Chi-square uniformity test for RND function.
    Refs: 8.4

    Program number 134
    Kolmogorov-Smirnov uniformity test for RND function.
    Refs: 8.4

    Program number 135
    Serial test for randomness.
    Refs: 8.4

    Program number 136
    Gap test for RND function.
    Refs: 8.4

    Program number 137
    Poker test for RND function.
    Refs: 8.4

    Program number 138
    Coupon collector test of RND function.
    Refs: 8.4

    Program number 139
    Permutation test for the RND function.
    Refs: 8.4

    Program number 140
    Runs test for the RND function.
    Refs: 8.4

    Program number 141
    Maximum of group test of RND function.
    Refs: 8.4

    Program number 142
    Serial correlation test of RND function.
    Refs: 8.4

    Program number 143
    Error -- two arguments in list for SIN function.
    Refs: 7.2 7.4 8.2 8.4

    Program number 144
    Error -- two arguments in list for ATN function.
    Refs: 7.2 7.4 8.2 8.4

    Program number 145
    Error -- two arguments in list for RND function.
    Refs: 7.2 7.4 8.2 8.4

    Program number 146
    Error -- one argument in list for RND function.
    Refs: 7.2 7.4 8.2 8.4

    Program number 147
    Error -- null argument-list for INT function.
    Refs: 7.2 7.4 8.2 8.4

    Program number 148
    Error -- missing argument list for TAN function.
    Refs: 7.2 7.4 8.2 8.4

    Program number 149
    Error -- null argument-list for RND function.
    Refs: 7.2 7.4 8.2 8.4

    Program number 150
    Error -- using a string as an argument for an
    implementation-supplied function.
    Refs: 7.2 7.4 8.2 8.4

    Program number 151
    User-defined functions.
    Refs: 16.2 16.4 7.2 7.4

    Program number 152
    Valid names for user-defined functions.
    Refs: 16.2

    Program number 153
    Error -- superfluous argument-list for user-defined function.
    Refs: 16.4

    Program number 154
    Error -- missing argument-list for user-defined function.
    Refs: 16.4

    Program number 155
    Error -- null argument-list for user-defined function.
    Refs: 7.2 7.4 16.2 16.4

    Program number 156
    Error -- excess argument in list for user-defined function.
    Refs: 16.4

    Program number 157
    Error -- user-defined function with two parameters.
    Refs: 16.2 16.4 7.2 7.4

    Program number 158
    Error -- using a string as an argument for a user-defined function.
    Refs: 7.2 7.4 16.2 16.4

    Program number 159
    Error -- using a string as an argument and parameter for a
    user-defined function.
    Refs: 7.2 7.4 16.2 16.4

    Program number 160
    Error -- function defined more than once.
    Refs: 16.4

    Program number 161
    Error -- referencing a function inside its own definition.
    Refs: 16.4

    Program number 162
    Error -- reference to function precedes its definition.
    Refs: 16.4

    Program number 163
    Error -- reference to an undefined function.
    Refs: 16.4

    Program number 164
    General use of numeric expressions in LET-statement.
    Refs: 6.2 6.4 7.2 7.4 8.2 8.4 16.2 16.4

    Program number 165
    Compound expressions and PRINT.
    Refs: 7.2 7.4 12.2 12.4

    Program number 166
    Compound expressions used with control statements and FOR-statements.
    Refs: 7.2 7.4 10.2 10.4 11.2 11.4

    Program number 167
    Exception -- evaluation of numeric expressions acting as function
    arguments.
    Refs: 7.5 8.4 16.4

    Program number 168
    Exception -- overflow in the subscript of an array.
    Refs: 6.4 6.5 7.5

    Program number 169
    Exception -- numeric underflow in the evaluation of numeric
    expressions acting as arguments and subscripts.
    Refs: 6.4 7.4 7.6 8.4

    Program number 170
    Exception -- negative quantity raised to a non-integral power in a
    subscript.
    Refs: 7.5 6.2

    Program number 171
    Exception -- LOG of a negative quantity in an argument.
    Refs: 8.5 16.2

    Program number 172
    Exception -- SQR of negative quantity in PRINT-item.
    Refs: 8.5 12.2

    Program number 173
    Exception -- negative quantity raised to a non-integral power in
    TAB-item.
    Refs: 7.5 12.2

    Program number 174
    Exception -- evaluation of numeric expressions in the PRINT statement.
    Refs: 7.5 8.5 12.2

    Program number 175
    Exception -- underflow in the evaluation of numeric expressions in
    the PRINT statement.
    Refs: 7.4 7.6 8.6 12.2

    Program number 176
    Exception -- negative quantity raised to a non-integral power in
    IF-statement.
    Refs: 7.5 10.2

    Program number 177
    Exception -- evaluation of numeric expressions in the IF-statement.
    Refs: 7.5 10.2

    Program number 178
    Exception -- underflow in the evaluation of numeric expressions in
    the IF-statement.
    Refs: 7.4 7.6 10.2

    Program number 179
    Exception -- LOG of zero in ON-GOTO-statement.
    Refs: 8.5 10.2

    Program number 180
    Exception -- evaluation of numeric expressions in the ON-GOTO
    statement.
    Refs: 7.5 10.2 10.5

    Program number 181
    Exception -- underflow in the evaluation of the EXP function in the
    ON-GOTO statement.
    Refs: 7.4 8.6 10.2 10.5

    Program number 182
    Exception -- negative quantity raised to a non-integral power in
    FOR-statement.
    Refs: 7.5 11.2

    Program number 183
    Exception -- evaluation of numeric expressions in the FOR-statement.
    Refs: 7.5 11.2

    Program number 184
    Exception -- underflow in the evaluation of numeric expressions in
    the FOR-statement.
    Refs: 7.4 7.6 11.2

    Program number 185
    Error -- missing keyword LET.
    Refs: 9.2 9.4

    Program number 186
    Extra spaces have no effect.
    Refs: 3.4

    Program number 187
    Error -- spaces at the beginning of a line.
    Refs: 3.4 4.4

    Program number 188
    Error -- spaces within line-numbers.
    Refs: 3.4 4.4

    Program number 189
    Error -- spaces within keywords.
    Refs: 3.4

    Program number 190
    Error -- no spaces before keywords.
    Refs: 3.4

    Program number 191
    Error -- no spaces after keywords.
    Refs: 3.4

    Program number 192
    Error -- PRINT-item quoted strings containing single quote.
    Refs: 3.2 12.2 12.4

    Program number 193
    Error -- PRINT-item quoted strings containing double quotes.
    Refs: 3.2 12.2 12.4

    Program number 194
    Error -- assigned quoted strings containing single quote.
    Refs: 3.2 9.2

    Program number 195
    Error -- assigned quoted string containing double quotes.
    Refs: 3.2 9.2

    Program number 196
    Line-numbers with leading zeros.
    Refs: 4.2 4.4

    Program number 197
    Error -- duplicate line-numbers.
    Refs: 4.4

    Program number 198
    Error -- lines out of order.
    Refs: 4.4

    Program number 199
    Error -- five-digit line-numbers.
    Refs: 4.2

    Program number 200
    Error -- line-number zero.
    Refs: 4.4

    Program number 201
    Error -- statements without line-numbers.
    Refs: 4.2 4.4

    Program number 202
    Error -- lines longer than 72 characters.
    Refs: 4.4

    Program number 203
    Effect of zones and margin on PRINT.
    Refs: 12.4 12.2

    Program number 204
    Error -- PRINT-statements containing lowercase characters.
    Refs: 3.2 3.4 12.2

    Program number 205
    Error -- assigned string containing lowercase characters.
    Refs: 3.2 3.4 9.2

    Program number 206
    Error -- ordering relations between strings.
    Refs: 3.2 3.4 3.6 10.2

    Program number 207
    Error -- assignment of a string to a numeric variable.
    Refs: 9.2

    Program number 208
    Error -- assignment of a number to a string variable.
    Refs: 9.2


    6.3 Cross-reference between ANSI Standard and Test Programs
    -----------------------------------------------------------

    Section 3: Characters and strings
    ---------------------------------

    3.2: Syntax
    1 93 102 103 104 109 112 192 194 195 204 205 206

    3.4: Semantics
    1 186 187 188 189 190 191 204 205 206

    3.6: Remarks
    206


    Section 4: Programs
    -------------------

    4.2: Syntax
    2 3 4 196 199 201

    4.4: Semantics
    2 3 4 187 188 196 197 198 200 201 202


    Section 5: Constants
    --------------------

    5.2: Syntax
    1 9 10 11 12 13 27 92 93 107 112

    5.4: Semantics
    1 9 10 11 12 13 14 27 30 34 60

    5.5: Exceptions
    30

    5.6: Remarks
    34 96 111


    Section 6: Variables
    --------------------

    6.2: Syntax
    6 11 12 22 27 56 57 58 59 61 79 164 170

    6.4: Semantics
    6 11 12 22 27 56 57 58 59 60 61 74 75 76 77
    78 164
    168 169

    6.5: Exceptions
    63 64 65 66 67 68 69 70 71 72 168

    6.6: Remarks
    23


    Section 7: Expressions
    ----------------------

    7.2: Syntax
    24 25 26 36 37 38 39 40 41 42 43 61 143 144
    145 146
    147 148 149 150 151 155 157 158 159 164 165 166

    7.4: Semantics
    24 25 26 33 35 39 40 41 42 43 61 143 144 145
    146 147
    148 149 150 151 155 157 158 159 164 165 166 169
    175 178
    181 184

    7.5: Exceptions
    28 29 31 32 35 167 168 170 173 174 176 177 180
    182 183

    7.6: Remarks
    39 40 41 42 43 117 119 120 121 124 127 128 169
    175 178
    184


    Section 8: Implementation-supplied functions
    --------------------------------------------

    8.2: Syntax
    130 131 143 144 145 146 147 148 149 150 164

    8.4: Semantics
    114 115 116 117 119 120 121 123 124 127 128 130
    131 132
    133 134 135 136 137 138 139 140 141 142 143 144
    145 146
    147 148 149 150 164 167 169

    8.5: Exceptions
    118 122 125 126 129 171 172 174 179

    8.6: Remarks
    123 175 181


    Section 9: The LET-statement
    ----------------------------

    9.2: Syntax
    6 11 12 56 57 58 185 194 195 205 207 208

    9.4: Semantics
    6 11 12 14 56 57 58 185

    9.5: Exceptions
    7


    Section 10: Control statements
    ------------------------------

    10.2: Syntax
    5 15 17 18 19 20 46 88 166 176 177 178 179 180
    181 206

    10.4: Semantics
    5 15 16 17 18 19 21 27 46 85 87 88 91 166

    10.5: Exceptions
    86 89 90 180 181


    Section 11: FOR-statements and NEXT-statements
    ----------------------------------------------

    11.2: Syntax
    44 45 46 47 48 49 50 51 166 182 183 184

    11.4: Semantics
    44 45 46 47 48 49 50 51 52 53 54 55 166


    Section 12: The PRINT-statement
    -------------------------------

    12.2: Syntax
    1 6 165 172 173 174 175 192 193 203 204

    12.4: Semantics
    1 6 7 9 10 11 12 13 14 165 192 193 203

    12.5: Exceptions
    8


    Section 13: The INPUT-statement
    -------------------------------

    13.2: Syntax
    107 108 109 110 113

    13.4: Semantics
    107 108 109 110 111 112

    13.5: Exceptions
    112


    Section 14: The DATA, READ, and RESTORE statements
    --------------------------------------------------

    14.2: Syntax
    92 93 94 95 102 103 104 105 106

    14.4: Semantics
    92 93 94 95 96

    14.5: Exceptions
    97 98 99 100 101


    Section 15: Array declarations
    ------------------------------

    15.2: Syntax
    56 57 58 62 65 66 67 68 69 70 71 72

    15.4: Semantics
    56 57 58 62 65 66 67 68 69 70 71 72 73 74 75
    76 80 81
    82 83 84


    Section 16: User-defined functions
    ----------------------------------

    16.2: Syntax
    151 152 155 157 158 159 164 171

    16.4: Semantics
    151 153 154 155 156 157 158 159 160 161 162 163
    164 167


    Section 17: The RANDOMIZE statement
    -----------------------------------

    17.2: Syntax
    131

    17.4: Semantics
    131


    Section 18: The REMark statement
    --------------------------------

    18.2: Syntax
    15

    18.4: Semantics
    15


    Appendix A:
    Differences between Versions 1 and 2 of the Minimal BASIC Test Programs
    ------------------------------------------------------------------------

    In the development of Version 2, we introduced a wide variety of
    changes in the test system. Some were substantive, some stylistic.
    Below is a list of the more significant differences.

    1. Perhaps the most extensive change has to do with the more complete
       treatment of the errors and exceptions which must be detected and
       reported by a conforming processor. We have tried to make clear
       the distinction between the two, and just what conformance entails
       in each case. Also, Version 2 tests a wider variety of anomalous
       conditions for the processor to handle. It is in this area of
       helpful recovery from programmer mistakes that the Minimal BASIC
       standard imposes stricter requirements than other language
       standards, and the tests reflect this emphasis.

    2. Version 2 differs significantly from Version 1 in its treatment of
       accuracy requirements. We abandoned any attempt to COMPUTE
       internal accuracy for the purpose of judging conformance as being
       too vulnerable to the problems of circularity. Rather, we
       formulated a criterion of accuracy, and computed the required
       results OUTSIDE the program itself. The programs, therefore,
       generally contain only simple IF statements comparing constants or
       variables (no lengthy expressions); a sketch of this style follows
       the list below. Those test sections where we did attempt some
       internal computation of accuracy, e.g., the error measure and
       computation of accuracy of constants and variables, are
       informative only.

    3. There are a number of new informative tests for the RND function.
       These are to help users whose applications are strongly dependent
       on a nearly patternless RND sequence.

    4. The overall structure of the test system is more explicit. The
       group numbering should help to explain why testing of certain
       sections of the ANSI standard had to precede others. Also, it
       should be easier to isolate the programs relevant to the testing
       of a given section, by referring to the group structure.

    5. We tried to be especially careful to keep the printed output of
       the various tests as consistent as their subject matter would
       allow. In particular, we always made sure that the programs
       stated, as explicitly as possible, what was necessary for the test
       to pass or fail, and that this message was surrounded by triple
       asterisks.
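
    A hypothetical sketch of the self-checking style described in point 2
    above (not one of the actual NBS programs; the reference value and
    tolerance are arbitrary):

    10 REM HYPOTHETICAL SKETCH -- NOT ONE OF THE ACTUAL NBS TEST PROGRAMS
    20 LET X = SQR(2)
    30 REM 1.414214 WAS COMPUTED OUTSIDE THE PROGRAM; THE TOLERANCE IS
    40 REM DELIBERATELY LOOSE, AS DESCRIBED IN POINT 2 ABOVE
    50 IF ABS(X - 1.414214) > 1E-3 THEN 80
    60 PRINT "*** TEST PASSED ***"
    70 GOTO 90
    80 PRINT "*** TEST FAILED ***"
    90 END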


    References
    ----------

    1. "American National Standard for Minimal BASIC, X3.60-1978"
    American National Standards Institute, New York, New York, January
    1978.

    2. "A Candidate Standard for Fundamental BASIC"
    J.A. Lee
    NBS-GCR 73-17, National Bureau of Standards, Washington DC, July
    1973.

    3. "PBASIC -- A Verifier for BASIC"
    T.R. Hopkins
    "Software -- Practice and Experience", Vol.10, pp.175-181 (1980)

    4. "The Art of Computer Programming, Vol.2"
    D.E. Knuth
    Addison-Wesley Publishing Company, Reading, Massachusetts (1969)


    EOF


  3. Re: WANTED: Volunteer to Scan Old Programs

    The NBS publications Mr Roche posted about are listed by title at the
    NIST Web site. (NIST is the US agency which now includes the former
    National Bureau of Standards.) Here's a Web link which lists that and
    other documents - a few minutes of Web searching found that
    information. Someone interested can search the NIST Web site or
    contact NIST about obtaining copies of these. I did not see whether
    they were available as photocopies or as PDFs; it's likely they are
    too old to be in PDF form.

    http://www.itl.nist.gov/lab/specpubs/sp500.htm

    These documents should be available in some way, from NIST or from a
    library which still retains these older documents in older NIST/NBS
    archives. Before the Web they were distributed in that fashion.

    It's also possible that, as a set of test programs, they are already
    archived somewhere on the Web. Due diligence may find them, or some
    institution which has them. Mr. Roche kindly posted enough information
    to identify them.

    However, Mr. Roche has a long-standing habit of posting whole
    documents into comp.os.cpm, for "preservation" purposes. No doubt, if
    he obtained these 200 plus BASIC programs as files, he'd dump them
    into the comp.os.cpm Usenet newsgroup, rather than offer them through
    a Web archive. It's not my job to tell him or anyone else what to
    post. But let me put it this way: does anyone WANT him or anyone else
    to post 200 BASIC programs here?

    Herb "not a programmer" Johnson
    retrotechnology.com

    Herbert R. Johnson, New Jersey USA
    http://www.retrotechnology.com/herbs_stuff/ web site
    http://www.retrotechnology.net/herbs_stuff/ domain mirror
    my email address: hjohnson AAT retrotechnology DOTT com
    if no reply, try in a few days: herbjohnson ATT comcast DOTT net
    "Herb's Stuff": old Mac, SGI, 8-inch floppy drives
    S-100 IMSAI Altair computers, docs, by "Dr. S-100"

  4. Re: WANTED: Volunteer to Scan Old Programs

    Hello, everybody!

    I have a question about the display of my messages.

    Me, I am now using Google to read the comp.os.cpm Newsgroup (the
    cybercafe that I use deleted the newsreader that I was using from the
    hard disk of my machine).

    I noticed several times that Google Groups wraps the lines of my
    ASCII files back to the beginning before reaching the CR/LF pair,
    long before reaching the right side.

    However, when I type a message (like right now), Google accepts lines
    until they reach the right end, and this is clearly more than 80
    columns.

    So, is anybody out there who has any idea how to make Google Groups
    display my ASCII messages correctly?

    (I searched in the French on-line help pages, but "proportional text"
    and "fixed font" does not exist in the index. There is also no mention
    of "Code Page 850", the MS-DOS International font that we use in
    Europe, and which is in ROM in all my PCs and printers.)

    If I just needed to add one line of HTML code at the beginning, so
    that 80-column lines of fixed-space characters would display tables
    (or ASCII graphics) correctly, it would help both you (to read the
    original texts) and me (I am annoyed by this problem with the
    display of texts that I spend so much time preparing).

    But I did not find them mentioned in the (French) help files of Google
    Groups.

    Yours Sincerely,
    Mr Emmanuel Roche


  5. Re: WANTED: Volunteer to Scan Old Programs

    > If I just needed to add one line of HTML code at the beginning, so
    > that 80-column lines of fixed-space characters would display tables
    > (or ASCII graphics) correctly, it would help both you (to read
    > the original texts) and me...


    If you can do it with HTML, then try a <PRE> tag at the beginning
    and a </PRE> tag at the end.

    "The HTML PRE element defines pre-formatted text. The text enclosed
    between the <PRE> and </PRE> elements usually retains spaces and
    line breaks. The text renders in a fixed-pitch font."

    I've not tried it using Google Groups, so cannot comment. Why not try
    it out on one of the test newsgroups?

    Hope that helps.

  6. Re: WANTED: Volunteer to Scan Old Programs

    *roche182* wrote on Wed, 08-03-19 18:47:
    > So, is anybody out there who has any idea


    Stop using the cybercafe. Even my way of doing things would be better
    for you. My old Atari newsreader makes a packed file for upload. I then
    make a long distance call to Berlin through a slow modem, upload that
    file to the MAUS, download another packed file in return and hang up.
    Total connection time is less than three minutes, affordable even
    through mobile CSD at 9600 bps. I then read and answer all my Usenet
    groups at my leisure.
    Even a long distance call from you to Berlin must be cheaper than, say,
    an hour or two in your cafe.

    There even is a free Atari emulator running under DOS available,
    although I use a commercial one under Windows.

  7. Re: WANTED: Volunteer to Scan Old Programs

    To me, it looks like google groups wraps lines at column 70.

    If you click the "options" link in the thread heading,
    then you have the option of selecting a proportional or
    fixed font.

    Will these show as 3 lines of 90?

    123456789 123456789 123456789 123456789 123456789 123456789 123456789
    123456789 123456789 123456789 123456789 123456789 123456789 123456789
    123456789 123456789 123456789 123456789 123456789 123456789 123456789
    123456789 123456789 123456789 123456789 123456789 123456789

    Phil

  8. Re: WANTED: Volunteer to Scan Old Programs

    Axel_Berger@b.maus.de (Axel Berger) writes:

    >*roche182* wrote on Wed, 08-03-19 18:47:
    >> So, is anybody out there who has any idea


    >Stop using the cybercafe.


    Yes; that is a good hint..!

    > Even my way of doing things would be better
    >for you. My old Atari newsreader makes a packed file for upload. I then
    >make a long distance call to Berlin through a slow modem, upload that
    >file to the MAUS, download another packed file in return and hang up.
    >Total conmnection time is less than three minutes, affordable even
    >through mobile CSD at 9600 bps. I then read and answer all my Usenet
    >groups at my leisure.


    _If_ you don't want an Atari(-Emulator) based solution, I think
    that you might like "Crosspoint". That's a native MS/DOS combined
    News/Mail-reader. The way it works is almost the same:

    You place a short call to your 'provider' and within one minute
    or two, you get new News and Mail and you post those messages
    that you produced.


    >Even a long distance call from you to Berlin must be cheaper than, say,
    >an hour or two in your cafe.


    Some years ago I had a Forth enthusiast from Belgium calling my
    system in northern Germany once a week...

    But in any case: Do not use 'google'. Some people filter out all
    messages from any user posting over that system!

    Amicalement, Holger

  9. Re: WANTED: Volunteer to Scan Old Programs

    Hello, Phil!

    > If you click the "options" link in the thread heading,
    > then you have the option of selecting a proportional or
    > fixed font.


    Yes, I sometimes use it. (I had forgotten it!)

    And, indeed, when I use it, the columns of characters are now aligned.

    But, is it possible to have Google display automatically a text in
    "fixed font"?

    And how to display the International 850 font? (Am I the only one to
    use it?)

    Yours Sincerely,
    Mr Emmanuel Roche


  10. Re: WANTED: Volunteer to Scan Old Programs

    &ltPRE&gt
    3 lines of 80?
    123456789 123456789 123456789 123456789 123456789 123456789 123456789
    123456789
    123456789 123456789 123456789 123456789 123456789 123456789 123456789
    123456789
    123456789 123456789 123456789 123456789 123456789 123456789 123456789
    123456789
    &lt/PRE&gt

  11. Re: WANTED: Volunteer to Scan Old Programs

    &ltPRE&gt
    3 lines of 80?
    123456789 123456789 123456789 123456789 123456789 123456789 123456789
    123456789
    123456789 123456789 123456789 123456789 123456789 123456789 123456789
    123456789
    123456789 123456789 123456789 123456789 123456789 123456789 123456789
    123456789
    &lt/PRE&gt


  12. Re: WANTED: Volunteer to Scan Old Programs

    it seems that you are stuck with 70
    Phil

  13. Re: WANTED: Volunteer to Scan Old Programs

    On 20 Mar, 18:38, roche...@laposte.net wrote:
    > Hello, Phil!
    >
    > > If you click the "options" link in the thread heading,
    > > then you have the option of selecting a proportional or
    > > fixed font.

    >
    > Yes, I sometimes use it. (I had forgotten it!)
    >
    > And, indeed, when I use it, the columns of character are now aligned.
    >
    > But, is it possible to have Google display automatically a text in
    > "fixed font"?
    >
    > And how to display the Internation 850 font? (Am I the only one to use
    > it?)
    >
    > Yours Sincerely,
    > Mr Emmanuel Roche


    Well: a way to convert from the 850 codepage and have the text appear
    correctly in modern browsers is to convert it to ISO 8859-1 (Latin-1).
    There are various tools that can do this conversion available over
    the internet.
    For example, Linux-centric people (like me) can use GNU "recode"
    with this syntax:
    recode csPC850Multilingual..latin1 file.txt
    Note that recode is able to do conversion from/to virtually any
    charset.

    An equivalent of recode for M$oft platforms can be found here:
    http://impulzus.sch.bme.hu/dome/conv.eng.html

    A step forward could be the conversion of the generated text file
    to HTML.
    An excellent and free program for doing this is here:
    http://dukelupus.pri.ee/txt2html.php
    It's a Windows executable, but it also runs well on Linux using wine
    ("wine txt2html.exe" from the prompt).

    Hope this can help.
    Piergiorgio Betti


  14. Re: WANTED: Volunteer to Scan Old Programs

    > it seems that you are stuck with 70

    Phil, I noticed that, each time, the string "&ltPRE&gt" appears on my
    screen.

    That means that it is not recognized by ?!?

    I assume that you are trying to send to the ?!? the string "<PRE>".

    In my opinion, that's because 1) you don't need the "&lt" and "&gt" to
    surround the "PRE" (only "<PRE>" is enough) and 2) there must be some
    missing statements BEFORE the "<PRE>" and "</PRE>".

    Maybe you remember that I wrote a WS4-to-HTML File Converter, that was
    published on the comp.os.cpm Newsgroup? I was obliged to generate a
    "block" of HTML commands at the beginning of the file, before
    translating the WS4 binaries into HTML code and ASCII characters.

    I hope that just a simple modification of WS4HTM.BAS will be enough to
    convert all the 850 characters into those used by Google Groups (else,
    I will write a WS4GOO.BAS, since this Google Groups is the only
    remaining way to reach all the CP/M users in the World).

    Yours Sincerely,
    Mr Emmanuel Roche


  15. Re: WANTED: Volunteer to Scan Old Programs

    *roche182* wrote on Sat, 08-03-22 17:16:
    >I assume that you are trying to send to the ?!? the string "<PRE>".

    Emmanuel, you of all people should be the one to accept the most
    general rules of "stick to the basics", "keep it compatible", and "no
    HTML on Usenet".

    You either use techniques and machines from the eighties or earlier or
    the most idiotic excesses of today's pointee-clickeeism, never any one
    of the really sensible and worthwhile advances that have occurred in
    between.


  16. Re: WANTED: Volunteer to Scan Old Programs

    On Mar 22, 11:16 am, roche...@laposte.net wrote:

    > Phil, I noticed that, each time, the string "&ltPRE&gt" appears on my
    > screen.....I assume that you are trying to send to the ?!? the string
    > "<PRE>".

    The < PRE > and < / PRE > and "&" are HTML commands. A reference for
    HTML commands will describe these items better than I could. Also,
    it's hard to describe them in email without invoking them!

    > since this Google Groups is the only
    > remaining way to reach all the CP/M users in the World).


    Mr. Roche believes that posting in comp.os.cpm, a Usenet group, is the
    only way to preserve CP/M related material. He seems to believe Google
    Groups is the only way to get to Usenet. A Web search of "usenet
    access" will find many MANY services which provide access to Usenet.
    And a Web search of "CP/M" will find many Web sites, discussion
    groups, Web archives and providers of CP/M oriented documents and
    files. This newsgroup is important, but not the only venue for either
    discussion or for distributing and preserving content on the subject
    of CP/M.

    It's more interesting to do the searches and see what's out there,
    than to describe it. I offer some Web links on my site if you want a
    start. But other sites have sets of links as well.

    http://www.retrotechnology.com/herbs_stuff/s_point.html

    Herb Johnson
    retrotechnology.com

    Herbert R. Johnson, New Jersey USA
    http://www.retrotechnology.com/herbs_stuff/ web site
    http://www.retrotechnology.net/herbs_stuff/ domain mirror
    my email address: hjohnson AAT retrotechnology DOTT com
    if no reply, try in a few days: herbjohnson ATT comcast DOTT net
    "Herb's Stuff": old Mac, SGI, 8-inch floppy drives
    S-100 IMSAI Altair computers, docs, by "Dr. S-100"

  17. Re: WANTED: Volunteer to Scan Old Programs

    On Sat, 22 Mar 2008 09:16:38 -0700 (PDT), roche182@laposte.net wrote:

    >> it seems that you are stuck with 70

    >
    >Phil, I noticed that, each time, the string "&ltPRE&gt" appears on my
    >screen.


    That should have been &lt; and &gt; (semi-colons following each)

    Better yet, just plain <PRE> and </PRE>


    As browsers become more strict you may need to wrap these in
    and tags.

    I think your best solution - if you insist on Usenet as your primary
    route to publishing the info you provide - is to type using hard
    returns and lines no longer than about 60 characters.

  18. Re: WANTED: Volunteer to Scan Old Programs

    On 18 Mar, 21:41, Herb Johnson wrote:
    > However, Mr. Roche has a long-standing habit of posting whole
    > documents into comp.os.cpm, for "preservation" purposes. No doubt, if
    > he obtained these 200 plus BASIC programs as files, he'd dump them
    > into the comp.os.cpm Usenet newsgroup, rather than offer them through
    > a Web archive. It's not my job to tell him or anyone else what to
    > post. But let me put it this way: does anyone WANT him or anyone else
    > to post 200 BASIC programs here?
    >


    Despite the fact that posting documents here is not too bad a habit,
    or at least it wasn't until now, the problem of how to share
    documents/software for all those who don't have a web site remains...
    a problem. And, in effect, posting more than 200 BASIC programs here
    could look like flooding this newsgroup's traffic.
    I can only offer, once again, my site for all the "orphan" material
    that comp.os.cpm subscribers wish to preserve or share with the rest
    of the world but have no specific "destination" for it.

    Piergiorgio Betti

  19. Re: WANTED: Volunteer to Scan Old Programs

    pbetti wrote:
    > Herb Johnson wrote:
    >
    >> However, Mr. Roche has a long-standing habit of posting whole
    >> documents into comp.os.cpm, for "preservation" purposes. No doubt,
    >> if he obtained these 200 plus BASIC programs as files, he'd dump
    >> them into the comp.os.cpm Usenet newsgroup, rather than offer them
    >> through a Web archive. It's not my job to tell him or anyone else
    >> what to post. But let me put it this way: does anyone WANT him or
    >> anyone else to post 200 BASIC programs here?

    >
    > Despite the fact that posting documents here is not a too bad
    > habit, or at least, it wasn't until now, the problem of how to
    > share documents/software for all those who don't have a web site,
    > remain... a problem. And, in effect, posting more than 200 basic
    > programs here could seem a flooding for this newsgroup traffic.
    > I can just offer, once again, my site for all the "orphan"
    > material, that comp.os.cpm subscribers wish to preserve or share
    > with the rest of the world but are in absence of a specific
    > "destination" for it.


    All very co-operative, but without details on how to access such a
    site (such as a URL) the offer seems pointless. :-)

    --
    [mail]: Chuck F (cbfalconer at maineline dot net)
    [page]:
    Try the download section.



    --
    Posted via a free Usenet account from http://www.teranews.com


  20. Re: WANTED: Volunteer to Scan Old Programs

    Bill wrote:
    > Herb Johnson wrote:
    >
    >> Mr. Roche believes that posting in comp.os.cpm, a Usenet group,
    >> is the only way to preserve CP/M related material.

    >
    > Probably true, if history is any guide. Virtually every 'early'
    > proprietary venue has disappeared, along with the articles
    > they once carried. That means services AND websites.
    >
    > Usenet continues to stand.


    As a communication mechanism. NOT as a storage mechanism.

    --
    [mail]: Chuck F (cbfalconer at maineline dot net)
    [page]:
    Try the download section.



    --
    Posted via a free Usenet account from http://www.teranews.com


+ Reply to Thread
Page 1 of 2 1 2 LastLast