TL-NCE
Project 3.22
Trends in Telecommunications
and Their Impact on TeleLearning
David C. Coll
Professor Emeritus
Department of Systems and Computer Engineering
Carleton University
A Report Submitted to the TeleLearning Network of Centres of Excellence
March 21, 2000
Table of Contents

Distance Education using Broadband Applications and Videoconferencing
Videoconferencing Software & Hardware
The Telecommunications Network
Wireless Local Access (LMDS/LMCS)
Integrated Services Digital Network (ISDN) Access
Local Area Network (LAN) Access
Evolution of the Intelligent Network
An Overview of Technology Trends and Implications for Research in TeleLearning
Standards and TeleLearning Technologies
Components of Educational Systems
The objective of this study is "To analyze, assess, and forecast future network trends as they affect TeleLearning and TeleTeaching, and to communicate that knowledge to the TLN as a whole." The study is based on a program of research and development in the assessment and forecasting of broadband, multimedia networks for TeleLearning. The study is directed towards the development of new models of learning and teaching environments, and the infrastructure created by developing technologies for the management, sustainment, and evolution of networked learning.
In this report we present a broad, "big picture" perspective of trends and directions in network technology and computer-communications, particularly with regard to the impact on TeleLearning of the emerging standardization of the Global Information Infrastructure.
Following a description of the major trends in telecommunication systems, the report surveys developments in the various components of the telecommunications network. The impact of these developments is then assessed, and their implications for TeleLearning research presented.
The report proposes a way in which the TL-NCE can contribute to the development of a conceptual framework for the TeleLearning environment. The framework consists of an architecture for a TeleLearning environment, comprising all of the means necessary and sufficient for the creation, delivery and support of TeleLearning and TeleTeaching systems.
The dominant message of the report is that current trends in network development will inevitably provide network services that will allow the creation of complete, interactive, interoperable, distance education systems and their introduction into the learning process.
The conclusion is that the TL-NCE should aim its research towards a horizon of a richer, more useable, universal, distributed, intelligent, broadband, multimedia, communications environment.
The author acknowledges the invaluable assistance of Ms. Mary Sharkey, who gathered, collated, and annotated much of the support material for this report. Ms. Sharkey prepared the material in Appendix A, and also provided the summary on distance education using broadband applications and videoconferencing.

The author is also grateful to Mr. Marc Wong, who annotated the URL references in Appendix B.
The author is indebted to the TeleLearning Network of Centres of Excellence for their support over the past five years.
Broadly speaking, the objective of this study is "To analyze, assess, and forecast future network trends as they affect TeleLearning and TeleTeaching, and to communicate that knowledge to the TLN as a whole."
That the study has taken place during a period of unparalleled progress in information technology is to state the obvious. Nowhere is the dynamic nature of that change more evident than in what might loosely be called, "computer communications" - the very subject of the study itself. What is not so obvious, perhaps, is that the progress has been in precisely those areas of technology that facilitate TeleLearning - in all its manifestations.
The technological basis for the creation and delivery of TeleLearning systems is changing, at an accelerating rate, in ways that are immensely favourable to TeleLearning. The rate of change, and the directions it is going in, dictate that the TL-NCE focus its attention on the future in anticipation of the communication services, networks and applications that will be available for the creation, delivery, and operation of TeleLearning enterprises. It may be taken for granted that the underlying technologies will be available. However, the rate of change is so great that TeleLearning research must be concerned with the services developed, not the technologies by which they are realized. Otherwise, the obsolescence that is inherent in the developing distributed information infrastructure will leave systems based on today's specific technology high and dry.
This study is based on a program of research and development in the assessment and forecasting of broadband, multimedia networks for TeleLearning. The study is directed towards the development of new models of learning and teaching environments, and the infrastructure created by developing technologies for the management, sustainability, and evolution of networked learning.
In this report we will attempt to present a broad, "big picture" perspective of trends and directions in network technology and computer-communications, particularly with regard to the impact on TeleLearning of the emerging standardization of the Global Information Infrastructure.
The study was predicated upon two basic hypotheses:
1. the connectivity provided by the Internet; the data organization and presentation provided by a web-centric environment; the information processing and display power provided by personal computer workstations, and the capabilities provided by the software that accompanies those machines will serve as a model that is both necessary and sufficient for the development of future TeleLearning systems[1],[2]; and
2. convergent networks will provide the communications services and tools, access, bandwidth, and terminal equipment required for TeleLearning applications.
The impact of these developments on TeleLearning will be determined to a large extent by the use that is made of new and powerful services provided by the communications system. The ability to utilize the new communications/information services provided by the developing convergent distributed information infrastructure will constitute a major factor in TeleLearning effectiveness.
As described in the current literature, for example in the paper by Kazovsky, Khoe, and van Deventer, “Future Telecommunication Networks: Major Trend Projections”[3]:
“Telecommunications is undergoing unprecedented changes. New competitive companies are offering lower costs and driving the introduction of new services. New technologies are emerging, government regulations are being reduced, and the industry is rapidly globalizing, even companies that have traditionally operated in national environments. The efficient transport of information is becoming a key element in today’s society. This transport is supported by a complex information infrastructure that, if properly implemented and operated, is invisible to end users. These end users seem primarily interested in services and cost only.
As new services evolve and the needs of users change, the industry must adapt by modifying existing infrastructures or implementing new ones. Telecommunications experts are therefore challenged to produce roadmaps toward future infrastructures. This is a difficult task because of incomplete knowledge of trends in users’ demands and how technology will evolve.” [4]
These ideas are by no means new, but the level of abstraction with which communication networks are being viewed is reaching profound levels. These new views of information system architecture include users and their assets and applications, as well as the comprehensive communications services and the "bitways" used to deliver them[5],[6].
In fact, the international community through the International Telecommunications Union (ITU) is formulating standards for the Global Information Infrastructure (GII) at a level that includes basic features of a global Information Society. The GII is seen as the basis for the Information Society of the future - a technological view that is far removed from the world of protocols, bit bashing, and brand-name loyalties.
The GII Functional Model consists of three classes of functions: Applications Functions, Middleware Functions (service control functions and management functions), and Baseware Functions. The Baseware Functions are "the logical entities which allow application and middleware functions to run, communicate with other functions by interfacing with network functions, and interface to users (functions normally associated with operating systems or virtual machines such as a Java virtual machine)"[7]. Baseware Functions include network functions, processing and storage functions, and human-computer interfacing functions. This model of global communications points the way towards the milieu in which TeleLearning applications will have to operate and the market into which they must sell.
The context of progress in Information Technology includes the convergence of telecommunication networks and computer communications. These two areas have been treated somewhat independently over the past thirty years. The former has been characterized as the realm of over-the-hill telephone engineers (the Bellheads), and is seen as based on circuit-switched voice circuits. The latter is seen by some as the domain of with-it computer geeks (the Netheads), and is recognized as the packet-switched Internet. The past year has witnessed the convergence of ideas in both these camps (which, of course, were never as distinct as their partisans believed).
Amazing redesign of the basic networks is taking place. The evolution is driven by the advent of so-called Voice over IP (VoIP), concerns about Quality of Service (QoS), availability of high-speed local access, achievement of terabit transmission rates on optical fibres, the emergence of one-stop shopping through multi-function Competitive Local Exchange Carriers (CLECs), and so on. However, it is not only the networks that are converging.
The information infrastructure has moved from providing communication links, to providing information distribution, to providing access to services. Users of telecommunications may soon deal with a single entity that will provide POTS, cable TV, Internet Service Provision, audio and video jukebox services, e-commerce, and distance education.

The revolution is fuelled by progress in networks, platforms, and services.
The report will describe trends in Telecommunications Technology in three major areas: networks, environments, and services. Networks will include such basic features as bandwidth, access, and quality of service. Environments include platforms and learning environments. Services refer to integrated services provided to the client by the telecommunications network. Trends in TeleLearning Technology will review the tools and services available for the creation, production, delivery, support and management of educational services and systems. Trends in Utilization will describe who is involved in distance education using broadband networks, and what they are doing.
The report will conclude with an analysis of the implications of the trends in telecommunications technology for those engaged in distance education and, in particular, researchers in the TL-NCE.
Readers who wish to receive the complete report may contact the author at coll@sce.carleton.ca.
(This section was written by Mary Sharkey)
It is the norm today for Universities and Colleges to offer on-line courses for both diploma and degree programs. On-line education is experiencing rapid growth. However, there is limited use of advanced broadband applications and videoconferencing in online course delivery by learning institutions. There are, however, extensive and advanced networks in use, around which there is much research and development. There is big business in the development and use of middleware and courseware. This software is geared towards learning institutions for the deployment of online education. It is an extensive system of registration, course delivery, marking and grading databases, administrative applications, technical support, and instructor and student resources.
On-line courses are executed in a few different formats:
• basic text format, where the registered user logs on to a specified course webpage (at regular intervals) and reads the text-based course material. Testing is usually done through a formatted on-line test paper that the user fills in and sends electronically via some sort of email system. Interaction with teachers/professors is limited to text-based computer conferencing (i.e., email). There may be access to an online library.
• most common format, where the registered user has access to a course website with a few features. These features may include links to an online library, an asynchronous course chat room, a shared class mailbox, email, related course newsgroups, and the professor/instructor. All interaction is restricted to simple computer-mediated communication.
• advanced format, where the registered user has access to a course website with a multitude of applications on a virtual reality platform. There may be links to an online library, research networks, related course sites, synchronous and asynchronous chat rooms, email, class mailboxes, and newsgroups. Most communication incorporates advanced technologies with the use of multimedia applications and collaborative work environments, and there may be limited use of videoconferencing. The virtual classroom is highly interactive and allows for point-to-point and multipoint communication. Seminars may be held via videoconferencing, or in advanced interactive hypertext format.
• scientific formats encompass very advanced technologies in the applications of:
• TeleMedicine
• TelePsychiatry
• TeleRadiology
• Remote doctor/patient interviews
• Remote controlled lab equipment
• Experimental, diagnostic & monitoring operations
• Data Analysis
• Shared experimental tasks
• Large databases of medical records
• Transfer of sophisticated tests, vital signs & results
• Access to remote expertise
Corporations are the users of leading edge technologies when it comes to broadband applications and videoconferencing. Companies are substituting point and multipoint videoconferencing for much travel time. Uses include:
• real‑time conferences
• sales and marketing activities
• application sharing
• presentations/seminars
• employee/corporate training
• job interviewing
• interactive multimedia
• board meetings
• inter/intra company communication
• email and newsgroups
• webcasting
• messaging
• data capture and polling
There is a large amount of Videoconferencing Solutions software and hardware available. These have capabilities ranging from small room conferences to hundreds of delegates in an audience; point-to-point and multipoint operation; application sharing; multimedia interactivity; voice-activated microphones; polling, data capturing, and data analysis; and eye-tracking cameras. There is also a vast array of videoconferencing directories.
The Telecommunications Network is the means by which information is disseminated or exchanged between two or more users, usually by electromagnetic means. While significant activity is being expended on standards for the Global Information Infrastructure, telecommunications actually consists of a number of disparate, separate networks in which information is exchanged in the form of voice, video, images, facsimile, and data. The networks include telephone networks, data networks, computer communication networks, cable television systems, and terrestrial and satellite broadcasting systems.
The dominant trend over the past few years has been the blurring of the distinction between these networks, as deregulation and technological advances have allowed a wide range of communication services to be offered by telephone carriers, Internet Service Providers, cable TV operators, and broadcasters.
Convergence started at the network layer with the universal adoption of digital transmission and switching. It continued with the introduction of the massive transmission bandwidth and high reliability of fibre optical connections, and multimedia packaging techniques such as ATM and SONET. The trend towards convergent information distribution has continued as broadband digital access to the transmission network has been provided. The most exciting area of convergence is currently at the service level. This is represented in the United States by proliferation of CLEC’s (Competitive Local Exchange Carriers) who can provide subscribers with a package of telephone and fax service, Internet service provision, cable TV programming, and specialized data services. It may well be that these “single-point-of-entry” vendors will be prime suppliers of e-commerce, distance education, telemedicine, and other broadband, multimedia communications in the near future.
Subscriber terminal equipment is also an integral part of the communications environment. As the Sun Microsystems’ motto states, “The network is the computer”. In fact, virtually all computers are connected to a network and it is reasonable to consider that the network starts at the keyboard. Thus, advances and trends in computer technology have a profound influence on telelearning, as discussed in the Section on Platforms.
Regardless of which network one is familiar with, the architecture of a telecommunications network may be viewed in terms of three major components: the access network, the transmission network, and the communication services provided. The following sections will describe current advances in telecommunications technology in terms of these three areas.
The access network connects users to the transmission network: the so-called “last mile”.
The most familiar access network is that which is used to access the telephone network. In most users' minds, the physical manifestation of the subscriber loop is the ubiquitous twisted pair of copper wires connecting every telephone through an "ohmic path" to the nearest switching office. In the common telephony application it carries an analog baseband voice signal. This signal occupies a nominal bandwidth of 4 kHz[8].
When used to transmit digital information over the telephone network the local subscriber loop carries an analog representation of the digital information. On dial-up voiceband circuits, a modem is used to generate the subscriber loop signal. It is derived from the logical information so that the transmitted signal will pass through the 4 kHz voiceband network. The maximum data rate with dial-up voiceband modems is 56 kbps, although the maximum rate is rarely achieved. This represents 14 bits per Hertz of bandwidth, or 16,384 distinct signal values for every symbol transmitted. This is an astounding technological accomplishment. Achieving these data rates over a 4 kHz channel requires extremely complex signal coding, error correcting coding, and substantial digital signal processing. It is an indication of the state-of-the-art in signal processing that is at the core of most multimedia communication systems today.
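As a check on that arithmetic, the short sketch below recomputes the report's figures and, using Shannon's standard capacity formula (an addition here, not taken from the report), estimates the signal-to-noise ratio such a modem demands of the line.

```python
import math

# The report's arithmetic: 56 kbps through a nominal 4 kHz voiceband channel.
rate_bps, bandwidth_hz = 56_000, 4_000
bits_per_hz = rate_bps / bandwidth_hz          # 14 bits per Hertz
print(2 ** int(bits_per_hz))                   # 16,384 distinct signal values

# Shannon's capacity formula, C = B * log2(1 + SNR), shows what that
# efficiency demands of the channel: SNR >= 2**(C/B) - 1.
snr_required = 2 ** bits_per_hz - 1
print(f"~{10 * math.log10(snr_required):.0f} dB of SNR required")   # ~42 dB
```

A 42 dB signal-to-noise requirement is at the edge of what a clean subscriber loop delivers, which is why the maximum rate is rarely achieved in practice.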
For direct connection to digital services, the subscriber loop may be used to carry digital information as a baseband signal, e.g., as pulse amplitude modulation. Even though the signal is discrete, i.e., capable of taking on only a fixed set of discrete values, it is still analog in the sense of being a continuous waveform. This technique is used in dedicated (so-called) digital access schemes such as ISDN. ISDN, the Integrated Services Digital Network, is a fully digital telephone and data system developed in the 1970's; it is described further in the Section on ISDN Access.
It would be of incalculable economic benefit to the telephone companies if they could use their immense investment in in-place twisted-pair subscriber loops to provide wideband multimedia access to the transmission network. It would save the cost of deploying alternatives such as Hybrid Fibre Coax (HFC), or other Fibre-In-The-Loop (FITL) technologies. It would also provide the possibility of wideband data and multimedia service provision to subscribers from a single vendor. The telephone carriers are rushing to deploy wideband access to protect their subscriber market in response to competition from cable and wireless access providers.
The telephone company approach to solving this problem is a range of Digital Subscriber Line (DSL) modulation schemes. When configured as a Digital Subscriber Line, the twisted pair subscriber loop may carry much more data than a voiceband modem, as well as voice service.
The most common DSL service is called Asymmetrical DSL, or ADSL. It provides a downstream data rate of 1.5 Mbps, and an upstream rate of 0.5 Mbps. The data rate on twisted pair loops is dependent on the distance from the Central Office. The basic design requirement is to provide ADSL over 18,000 feet, a distance that covers 80% of the subscriber loops in the United States. Very High Speed DSL, VDSL, can provide up to 51 Mbps (OC-1) over paths of up to 900 feet.
A simpler version of ADSL, called G.Lite, provides a downstream data rate of 768 kbps, and an upstream rate of 384 kbps, at distances up to 4.5 km from the Central Office. Bell Canada markets this service as the Sympatico High Speed Edition (HSE), using Nortel 1 Meg Modems[9].
There are two different ADSL modulation schemes. One is called Carrierless Amplitude Phase (CAP) modulation. The other is called Discrete Multi-Tone (DMT). The DMT signal consists of an array of carrier tones separated by 4.3125 kHz. Each of the tones is modulated by the digital information. The frequencies below the first tone are occupied by Plain Old Telephone Service (POTS) signals. Frequencies between 25 kHz and 164 kHz are used for upstream transmissions, and those above (164 to 1,104 kHz) are used by downstream signals. A filter, called a splitter, is installed at the subscriber's premises to separate the telephone and data signals.
As described above, the DMT signal consists of the summation of a number of harmonically related tones, each weighted by a data signal. This is the definition of the spectrum of a time signal in the Discrete Fourier domain. It is a rather obvious, but clever, step to realize the transmitted signal as the Inverse Discrete Fourier Transform of the DMT signal. Thus, the actual transmitted signal is created as the result of a mathematical computation. At the receiver the data is extracted by computing the Discrete Fourier Transform of the received signal – which turns out to be the optimal receiver!
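A minimal numerical sketch of that idea follows. The tone indices and the 4-QAM loading are illustrative assumptions only (real ADSL assigns each tone a bit depth matched to its measured SNR); the point is that modulation is literally an inverse DFT and demodulation a DFT.

```python
import numpy as np

N = 512                       # IDFT size (256 tones, Hermitian-symmetric)
tones = list(range(38, 256))  # roughly the 164 kHz - 1.1 MHz downstream band
                              # at the 4.3125 kHz tone spacing

rng = np.random.default_rng(0)
# One 4-QAM symbol per tone -- a simplification; real ADSL loads each tone
# with a bit depth matched to its measured signal-to-noise ratio.
symbols = rng.choice([-1, 1], len(tones)) + 1j * rng.choice([-1, 1], len(tones))

# Build a Hermitian-symmetric spectrum so the inverse DFT yields a real waveform.
spectrum = np.zeros(N, dtype=complex)
spectrum[tones] = symbols
spectrum[[-t for t in tones]] = np.conj(symbols)

tx = np.fft.ifft(spectrum).real     # the transmitted DMT block *is* an IDFT

rx = np.fft.fft(tx)                 # the receiver *is* a DFT
assert np.allclose(rx[tones], symbols)
```

In a real modem the transmitted block would also carry a cyclic prefix, and the receiver would equalize each tone individually; the transform pair above is the core of the scheme.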
DMT communications is an excellent example of the application of Digital Signal Processing (DSP). DSP is important for two reasons. First, it allows the direct implementation of theoretically optimal digital communications schemes, as well as inherently digital techniques such as error detection and correction. Second, it is a prime example of a fundamental shift in communications engineering: the replacement of specialized electronic circuit design with DSP software design – as one expert has expressed it, “DSP turns hardware into software”. Modern Digital Signal Processors, very high speed computers with specialized architectures, are a fundamental component in modern multimedia communication systems, where they are used to implement speech and video compression as well as modulation and demodulation. As discussed in the Section on Terminal Platforms, the Multi Media Extension (MMX) Instructions in the Pentium II and III CPU chips are actually a set of DSP instructions. These allow the Pentium to implement signal processing without the need for an auxiliary co-processor, thereby opening up multimedia communications to all Personal Computer users.
It was thought at one time that cable TV networks would be the medium of choice for the distribution of broadband services, especially in urban areas[10]. Up until very recently, the cable television industry was interested only in the broadcast of entertainment television programming. Although very aware of the potential for non-programming services[11], the policy of the industry was "when the time is appropriate, cable will carry data and other services". In fact, the cable industry was careful to avoid any suggestion that they were a telecommunications carrier, and subject to regulation as such. Well, conditions change, and the move to service convergence has brought the cable television industry into non-programming services in a big way.
The introduction of cable modems provides high-speed, two-way digital communications into the home. According to Cable Datacom News[12]:
"A "Cable Modem" is a device
that allows high-speed data access (such as to the Internet) via a cable TV
network. A cable modem will typically have two connections, one to the cable
wall outlet and the other to a computer (PC). Most cable modems are external
devices that connect to the PC through a standard 10Base-T Ethernet card and
twisted-pair wiring. External Universal Serial Bus (USB) modems and
internal PCI modem cards are also under development.
The dominant service is
high-speed Internet access. This enables the typical array of Internet services
to be delivered at speeds far faster than those offered by dial-up telephone
modems. Other services will include access to streaming audio and video
servers, local content (community information and services), access to CD-ROM
servers, and a wide variety of other service offerings. New service ideas are
being developed daily.
Cable modem speeds vary widely, depending on
the cable modem system, cable network architecture, and traffic load. In the
downstream direction (from the network to the computer), network speeds can be
anywhere up to 27 Mbps, an aggregate amount of bandwidth that is shared by
users. Few computers will be capable of connecting at such high speeds, so a
more realistic number is 1 to 3 Mbps. In the upstream direction (from computer
to network), speeds can be up to 10 Mbps. However, most modem producers have
selected a more optimum speed between 500 Kbps and 2.5 Mbps.
An asymmetric cable modem scheme is most
common. The downstream channel has a much higher bandwidth allocation (faster
data rate) than the upstream, primarily because Internet applications tend to
be asymmetric in nature. Activities such as World Wide Web (http) navigating
and newsgroups reading (nntp) send much more data down to the computer than to
the network. Mouse clicks (URL requests) and e-mail messages are not bandwidth
intensive in the upstream direction. Image files and streaming media (audio and
video) are very bandwidth intensive in the downstream direction."
Typically, a cable modem sends and receives
data in two slightly different fashions. In the downstream direction, the
digital data is modulated and then placed on a typical 6 MHz television
channel, somewhere between 50 MHz and 750 MHz. Currently, 64 QAM is the
preffered downstream modulation technique, offering up to 27 Mbps per 6 MHz
channel. This signal can be placed in a 6 MHz channel adjacent to TV
signals on either side without disturbing the cable television video signals.
The
upstream channel is more tricky. Typically, in a two-way activated cable
network, the upstream (also known as the reverse path) is transmitted between 5
and 42 MHz.
This tends to be a noisy environment, with RF interference and impulse noise.
Additionally, interference is easily introduced in the home, due to loose connectors
or poor cabling. Since cable networks are tree and branch networks, all this
noise gets added together as the signals travel upstream, combining and
increasing. Due to this problem, most manufacturers use QPSK or a similar
modulation scheme in the upstream direction, because QPSK is more robust scheme
than higher order modulation techniques in a noisy environment. The drawback is
that QPSK is "slower" than QAM."
Both ADSL and cable modems provide an Ethernet service to an Internet Service Provider (ISP). The ADSL provides a dedicated link to the ISP, while the cable modem subscriber effectively shares a LAN with all the neighbours. Ultimately, the bandwidth of the copper wire connection - the telco's subscriber loop - will reach its limit. The cable TV plant has much more bandwidth available. Whether it is used or not depends on the market demand.
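To make the sharing concrete, here is a back-of-the-envelope sketch (all parameter values are illustrative assumptions, not measurements) of how per-subscriber throughput on one cable segment falls as active neighbours are added, while an ADSL subscriber keeps a dedicated rate.

```python
def per_user_mbps(channel_mbps, active_users):
    """Idealized equal split of one cable segment's downstream channel."""
    return channel_mbps / max(1, active_users)

for n in (1, 5, 20, 100):
    print(f"{n:3d} active users -> {per_user_mbps(27.0, n):5.2f} Mbps each")
# An ADSL subscriber, by contrast, keeps a dedicated 1.5 Mbps link to the
# ISP regardless of what the neighbours are doing.
```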
Until fibre is fully deployed in the local loop, wireless access is one of the most exciting possibilities for truly broadband access. Although it is susceptible to weather conditions and interference, wireless has many advantages when it comes to the physical deployment, so it may be the initial broadband access method.
According to mm-Tech, Inc.[13]:

"Local Multipoint Distribution Service (LMDS), or Local Multipoint Communication Systems (LMCS), as the technology is known in Canada, is a wireless, two-way broadband technology designed to allow network integrators and communication service providers to quickly and inexpensively bring a wide range of high-value, quality services to homes and businesses. Services using LMDS technology include high-speed Internet access, real-time multimedia file transfer, remote access to corporate local area networks, interactive video, video-on-demand, video conferencing, and telephony, among other potential applications.
In the United States, the FCC became interested in the possibility of LMDS bringing long needed competition to the telecommunication marketplace, where it has the use of 1.3 GHz of RF spectrum to transmit voice, video and fast data to and from homes and businesses. With current LMDS technology, this roughly translates to a 1 Gbps digital data pipeline. Canada already has 3 GHz of spectrum set aside for LMDS and is actively setting up systems around the country. Many other developing countries see this technology as a way to bypass the expensive implementation of cable or fiber optics and leapfrog into the twenty-first century.

LMDS systems are cellular because they send these very high frequency signals over short line-of-sight distances. These cells are typically spaced 4-5 kilometers (2.5 - 3.1 miles) apart. LMDS cell layout determines the cost of building transmitters and the number of households covered.

Direct line-of-sight between the transmitter and receiver is a necessity. Reflectors and/or repeaters can spray a strong signal into shadow areas to allow for more coverage. Various isolation techniques can be used to prevent interference between signals.

A great deal of the early discussion of LMDS applications centered around the transmission of video. With the recent surge of interest in the Internet and the accompanying demand for bandwidth, fast data appears to be the greatest application for this technology.

With 1.3 GHz of spectrum, LMDS can provide a pipeline for a great deal of data. Homeowners pay about $30 per month for video, but businesses regularly pay over $1000/month for a high speed T1 (1.544 Mbps) line from phone companies. Using only the 850 MHz unrestricted bandwidth, along with a modulation scheme such as quadrature phase shift keying (QPSK), well over 100 T1 equivalent lines can be provided in a cell even without splitting cells into separate sectors. By using horizontal and vertical polarized sectors in a cell, LMDS providers will be able to re-use bandwidth and multiply the number of T1 equivalents available. A typical commercial LMDS application can potentially provide a staggering downlink throughput of 51.84 - 155.52 Mbps and a return link of 1.544 Mbps (T1). This capacity translates into phenomenal potential to provide "full service network" packages of integrated voice, video and high-speed data services. Actual service carrying capacity depends on how much spectrum is allocated to video versus voice and data applications. Assuming that 1 GHz of spectrum is available, an all-video system could provide up to 288 channels of digital broadcast quality television plus on-demand video services. Fast data, Internet access, PCS backhaul, local loop bypass, digital video, digital radio, work at home, and telemedicine are all possible. In fact, they are all possible within the same cell."
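The quoted T1 claim is easy to sanity-check. The sketch below assumes a conservative spectral efficiency of 1 bit/s/Hz (QPSK carries at most 2 b/s/Hz before filtering, guard bands, and coding overhead are accounted for); even so, 850 MHz of spectrum comfortably exceeds "well over 100" T1 equivalents.

```python
def t1_equivalents(bandwidth_hz=850e6, bits_per_s_per_hz=1.0, t1_bps=1.544e6):
    """Rough capacity check for the quoted LMDS claim. The spectral
    efficiency is an assumed, conservative figure: QPSK carries at most
    2 b/s/Hz before filtering, guard bands, and coding overhead."""
    return bandwidth_hz * bits_per_s_per_hz / t1_bps

print(round(t1_equivalents()))   # ~550 T1 lines, well clear of the claim
```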
ISDN[14] is a standardized digital network. It was developed in the 1970's to provide digital telephone services. It is based on 64 kbps channels, separate signalling (Common Channel Signalling System 7), and synchronous TDM transmission. ISDN services are available throughout Canada and much of the world. Two service levels are available: the basic rate interface (BRI) which provides two 64-kilobit per second B-channels and a 16-kilobit per second D-channel, and the Primary Rate Interface (PRI) which provides 23 B-channels and a 64 kbps D channel. ISDN operates over a Digital Subscriber Line (DSL) operating at 160 kbps in North America.
The major question about ISDN is why it is not more heavily used. The basic reasons seem to be two-fold. One is cost, of course. The other is the lack of determination in DSL development. When one considers the recent achievements in ADSL and other modems using the twisted-pair subscriber loop, one wonders what "the network" would look like now if ISDN had been deployed.
ISDN is a viable access technology for data rates in the 64 to 1,536 kbps range.
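The channel arithmetic behind those rates, for the record (the 64 kbps B and 16/64 kbps D channel sizes come from the text; the T1 framing figure is the standard SONET/T-carrier value):

```python
# ISDN channel arithmetic, using the rates given in the text (kbps).
B, D_BRI, D_PRI = 64, 16, 64

bri_payload = 2 * B + D_BRI     # 144 kbps; the 160 kbps North American DSL
                                # line rate adds framing/maintenance overhead
pri_payload = 23 * B + D_PRI    # 1,536 kbps; fills a 1.544 Mbps T1 carrier
                                # less its 8 kbps of framing
print(bri_payload, pri_payload)
```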
Many users of computer communications access the transmission network through a Local Area Network, or LAN. LAN’s were set up during the 1970’s to allow stand-alone mini-computers to access common files and share printer resources. They grew out of the time-sharing environment of the 1960’s as workstation power developed and as protocols were developed to allow meaningful inter-computer communications.
The original model of the LAN was Ethernet – a shared communications space - a common channel implemented originally on a single coaxial cable to which all of the terminals were connected. It was “well understood” at the time (and the myth persists) that computer communications was “bursty”, i.e., sporadic rather than continuous. This was reasonable at the time when most computer communications consisted of transmitting the contents of a terminal keyboard buffer to a mainframe computer. Thus, it was not necessary to commit a dedicated circuit (or connection) to each terminal. In fact, it was very inefficient to do so, as well as expensive. A common channel could be used as long as a multiple access protocol could be devised. The Ethernet protocol does just that. It is based on a technique, called ALOHA, used to connect computers on outlying Hawaiian islands to a central mainframe computer over radio channels.
Ethernet was designed for bursty communications, and overloads at about 70% of its nominal traffic capacity. Even so, it dominates LAN technology. This domination, coupled with the fact that the fundamental Ethernet design allows for a gigantic number of uniquely numbered terminals, has ensured that – as so often happens[15] - Ethernet has managed to maintain its competitive position in the access network.
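The classical ALOHA analysis (a textbook result, supplied here for illustration rather than taken from the report) shows why random-access channels saturate well below nominal capacity; Ethernet's listen-before-talk protocol does considerably better, but exhibits the same qualitative roll-off under heavy load.

```python
import math

def pure_aloha_throughput(offered_load):
    """Classical pure-ALOHA result: S = G * exp(-2G), with G the offered
    load in frames per frame-time. CSMA/CD (Ethernet) performs much better
    by listening before and during transmission, but any random-access
    scheme degrades rather than saturating gracefully."""
    return offered_load * math.exp(-2 * offered_load)

for G in (0.1, 0.25, 0.5, 1.0):
    print(f"G = {G:.2f}  ->  S = {pure_aloha_throughput(G):.3f}")
# Peak throughput is 1/(2e), about 18%, reached at G = 0.5.
```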
Ethernet competitiveness is also due to the traditional coupling between the Ethernet LAN access and the TCP/IP protocol suite. This coupling is due to the fortuitous, farsighted inclusion of the TCP/IP protocol in the Unix operating system of the Sun Microsystems workstations that populated early LANs. The synergy between Unix, TCP/IP, and Ethernet dominated what has come to be called “computer communications” – the internetworking approach to long distance communications that forms the basis of the Internet. The creatures that dwell in the unregulated, innovative culture that has developed around these technologies have termed themselves “Netheads”, as opposed to the “Bellheads”, who they consider to be those “dinosaurs” who maintain the traditional regulated telephone networks. The dichotomy between these views of communications has seriously affected the advancement of telecommunications, as discussed in the Section on the Transmission Network.
Ethernet data rates started out at 10 Mbps; then grew to 100 Mbps, and more recently to 1,000 Mbps, or 1 Gbps. The physical medium started out on especially high-quality coaxial cable, and has migrated to twisted-pair wiring. Optical fiber is used in the Gigabit Ethernet. The topology of Ethernet LAN’s has migrated from the original bus to a hubbed hierarchical tree. Ethernet’s multiple access protocol has been replaced in many cases by switched hubs. Most modern business premises are wired in this way. Twisted-pair wiring is brought to wiring closets on each floor and interconnected through vertical risers. One of the most remarkable recent technological “discoveries” is that the wiring of the subscriber loops in the local telephone access network resembles the wiring in buildings, and the entire Central Office area can be used as an Ethernet! A Nortel system called The Elastic Network operates on this principle. The Sympatico HSE access system uses an ADSL modem to connect the output of an Ethernet card in the home computer to the Internet Service Provider.
Cable TV system modems, as discussed above, operate an Ethernet on each branch of the network. Thus, performance depends on the number of users on each segment and their traffic requirements.
Users of LAN’s are accustomed to high-speed local service. The data rate they obtain when accessing remote terminals depends on the bandwidth with which their local LAN operator connects to the Point of Presence (POP) on the Internet backbone. It also depends on the traffic volume at the time, since they are competing with the other users for that shared capacity.
Truly broadband access can be provided by optical fibre connections or by microwave radio circuits.
For a time in the early '90's, the telephone industry appeared to be focusing on Hybrid Fibre Coax (HFC) as one of the means to provide broadband access. HFC is a hubbed system, using fibre optical cable to connect hubs, or switches, to local coaxial cable distribution plants. There are a number of other proposals to provide Fibre to the Curb (FTTC), where optical fibre would be brought to the entrance to a building and the internal copper plant be used for internal distribution; or, ultimately, to provide Fibre in The Loop (FITL), where fibre would be deployed directly to the subscriber location to match up with internal fibre.

It is particularly attractive for fibre to be deployed in new industrial parks, office buildings, and residences - from both an economical and a marketing perspective.
Very high frequency radio circuits are being deployed in the Multichannel Multipoint Distribution Service (MMDS) and the Local Multipoint Distribution Service (LMDS), called LMCS in Canada. These wireless access technologies have the advantage of low installation costs, although subscribers require expensive radio transceivers.
The Transport Network, referred to in the trade as "the cloud", is that amorphous entity that takes the information from the local access points and delivers it to the destination. There are two dominant areas of change in the Transport Network: switching and transmission.

Circuit switching is the provision of a continuous path from a source to a destination for the duration of a call. It requires that the call be set up prior to the exchange of information and taken down after the completion of the call. Other than propagation delay, information is delivered as it is received, without any change in the delay or the order of arrival. Circuit switching is ideal for the transmission of voice and other continuous information. It is the mode in the Public Switched Telephone Network (PSTN).
There is no implication that circuit switched networks are analog networks. To the contrary, the PSTN has been a digital network for decades. Voiceband signals are converted to 64 kilobit per second data streams as soon as they enter the transmission network, i.e., at the termination of the subscriber loop. Circuit switching is performed in computer-like machines, either spatially (take the bit from this input port and put it in that output port), or temporally (take the bit from this time slot in the input frame and put it in that slot in the output frame).
The actual connection through the network is controlled by messages carried on a packet-switched data network called Common Channel Signalling System 7 (CCS7).
The major advantage of circuit switching is the guaranteed quality of service that can be provided. The major drawback is the fact that transmission bandwidth and switch ports are tied up for the duration of the call, regardless of whether or not the circuit is being used.
Packet switching arose from the fact that computer data is 'bursty', that is, it arrives in blocks: it is not continuous. Since data is bursty, expensive transmission and switching facilities can be shared, which is more economical. In packet switching, data is gathered into packets, addressing and control information is attached to the packet as headers, and the complete packets are forwarded from one node to another through the network. In a sense, packetization is a form of statistical multiplexing. There are two distinct switching modes: connection-oriented and connectionless. In connection-oriented packet switching, the route from one node (switch) to another is determined before any packets are sent, so there is a set-up time, but the circuits may be shared, because each packet is uniquely identified. In the connectionless mode, each packet is examined at each node to determine its destination. Each node knows the 'best' route to forward the packet to in order to reach the ultimate destination. Maintaining the routing information stored in each node (in routing tables) requires an extensive data communication system in the network that runs complex routing maintenance protocols.
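A toy illustration of that per-node decision follows: connectionless forwarding reduces to a longest-prefix-match lookup in the local routing table. The prefixes and port names here are hypothetical.

```python
import ipaddress

# Hypothetical routing table. Each node forwards toward the most specific
# matching prefix -- the lookup at the heart of connectionless switching.
routes = {
    ipaddress.ip_network("0.0.0.0/0"):       "default-gateway",
    ipaddress.ip_network("134.117.0.0/16"):  "port-2",
    ipaddress.ip_network("134.117.32.0/20"): "port-3",
}

def next_hop(destination):
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routes if addr in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(next_hop("134.117.40.9"))   # port-3: the most specific (/20) route wins
print(next_hop("8.8.8.8"))        # default-gateway
```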
There are many packet switching systems: X.25, an aging data service still used extensively in Canada; Frame Relay, used for voice and data transmission, often on private networks; and Asynchronous Transfer Mode (ATM), the backbone transport mechanism for multimedia traffic. ATM can provide emulated circuit-switched service.
The Internet is a packet switched network of networks, using the TCP/IP suite of protocols.

IP, the Internet Protocol, provides an unreliable, best-effort, packet delivery system. Packets may arrive with transmission errors, they may arrive out of order, or they may not arrive at all. The consistency of arrival times is not assured; there is bound to be considerable variation in the times of arrival - jitter.

TCP, the Transmission Control Protocol, provides end-to-end service, but the users are responsible for the ultimate integrity and accuracy of their data.
The Internet, as it has evolved to date, is an inhospitable environment for voice and video, although these problems are being addressed. The reason is commercial: the Internet is FREE! Well, there are no long distance charges, unlimited connect time is common, and the quality is improving.
Fibre optics creates bandwidth. Nortel announced late last year that they had achieved a throughput of 1.6 terabits per second (Tbps) on a single optical fibre[16]. They used Dense Wavelength Division Multiplexing (DWDM) to combine 160 OC-192 signals on the fibre. Each OC-192 signal has a bit rate of 9,953.28 megabits per second (Mbps), or roughly 10 Gbps.
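The arithmetic, spelled out (OC-192's 9,953.28 Mbps line rate is the standard SONET figure):

```python
# DWDM capacity check: 160 wavelengths, each carrying one OC-192 signal.
oc192_gbps = 9.95328                 # OC-192 line rate, ~10 Gbps
wavelengths = 160
total_gbps = wavelengths * oc192_gbps
print(f"{total_gbps:,.1f} Gbps ~= {total_gbps / 1000:.1f} Tbps")
# -> 1,592.5 Gbps, i.e. about 1.6 Tbps on a single fibre
```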
The availability of enormous reliable bandwidth in the core network has led to what is called "The Death of Distance" in the telephone industry. Long distance charges are almost nil, or soon will be. Local distribution is the problem.

We are almost in the situation where there is unlimited bandwidth - at least on fibre wireline. Unfortunately, this infinite bandwidth is not deployed very widely, the ability to amplify and switch such bit rates is barely possible, and the demand for bandwidth seems to rise as fast as it becomes available.
Quality of Service (QoS) is the key issue. The most promising advances are related to ways to maintain the benefits of packet switched networks by the reservation of network capacity and resources: transmission bandwidth and space in the router queues. New standards such as MPLS and the Media Gateway Control Protocol (MGCP) are being developed for telephony over IP-based packet networks.

The current state of standards for IP telephony is shown in the following illustration[17].
According to Andrew Brown[18]:

"The Internet community has been divided for many years over the definition of QOS. Currently it is commonly agreed that QOS means “the capability to differentiate between traffic or service types so that users can treat one or more classes of traffic differently than others” (i.e., Voice over IP is different than WWW traffic).

SLA’s are Service Level Agreements, or contracts between a customer and the service provider. These contracts hold the service provider to a promise of providing a level of “Quality Of Service” to the customer's traffic. SLA’s could provide for such things as guaranteed levels of packet loss, latency, throughput, and network availability, and could be based on time of day, etc.

Offering a guaranteed level of service over packet based networks (inherently connectionless environments) like the Internet is a major engineering challenge. I believe this will be the biggest challenge facing Internetwork Engineers in the next decade."
As seen in the illustration above, network architectures are being devised that will allow the provision of a variety of services over a single network, with negotiated QoS. Rapid advances are being made in the development and deployment of standards for such services as Voice over IP, multimedia Internet broadcasting, and multi-point conferencing, to say nothing of the infrastructure for secure, reliable e-commerce applications.
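One common building block for enforcing the SLA parameters Brown describes is the token-bucket policer sketched below. This is a generic textbook mechanism, not something the report or Brown specifies: traffic within the contracted rate passes, and excess traffic is dropped or de-prioritized.

```python
import time

class TokenBucket:
    """Minimal token-bucket policer of the kind used to enforce an SLA
    rate. A sketch only; production routers apply this per traffic class,
    usually in hardware."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0          # refill rate in bytes/second
        self.capacity = burst_bytes         # largest tolerated burst
        self.tokens = burst_bytes
        self.stamp = time.monotonic()

    def conforms(self, packet_bytes):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.stamp) * self.rate)
        self.stamp = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                     # within contract: forward normally
        return False                        # out of profile: drop or remark
```

A 1.5 Mbps contract with an 8 kilobyte burst allowance, for example, would be TokenBucket(1_500_000, 8192), with each arriving packet forwarded or penalized according to conforms(len(packet)).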
The network can and will provide a vast array of services. Messerschmitt describes several in his paper[6]:
· audio and video transport,
· file management,
· printing,
· electronic commerce mechanisms,
· communications security, and
· reliable data delivery.
These communication services support a number of applications that may be used by system builders: email, telephony, database access, file transfer, www browsing, and video conferencing.
Other services and applications available from the network include audio and video streaming, players, web site management software, and so on. Specific tools are available in great profusion now[19] - they are so diverse that it is difficult to assess them, let alone take the risk of adopting any one of them.
Obviously, when one considers the panorama of tools required to develop a TeleLearning/TeleTeaching system, this set is just the beginning. The generics behind the tools must migrate to a more accessible state, perhaps within a communications environment, before they become interoperable and accepted. One example of tools for interactive group work, including broadcasts and conferencing, is MBone[20].
The modern telephone network consists of two major parts: the access network and the transmission network. The access network comprises the means whereby subscribers connect to the network, whether it is the ubiquitous twisted-pair of copper wires, a cellular radio link, or an optical fibre "drop". The transmission, or core, network consists of trunks and switches and the means to route traffic through the network. The telephone network also includes the means to operate, administer, maintain, and provision the network.
As well, the telephone network includes the means to provide a wide range of services to subscribers. Typical of the latter are 1-800 services, Calling Card Services, Call Waiting, Call Forwarding, the *-code services ( *69 = last number calling), and the ability to accommodate the "equal access" requirements of the regulator.
The flexibility to create and offer new services, as well as contemporaneously incorporating new technologies, derives from a radical restructuring of the basic network. The telephone network has evolved from a circuit-switched, time division multiplexed, digital network with in-band signalling to a heterogeneous network of optical fibre based circuit and packet switched routes, with a plethora of transport mechanisms, controlled by a packet-switched signalling network. As illustrated in the following figure[21], the Intelligent Network consists of three major components: Data, Switching, and Processing. The architecture is designed to provide access to carrier and third-party Open Distributed Services through a Public Services Support Infrastructure.
In essence, the carrier networks have reached a stage of evolution that is comparable in its seeming simplicity to that of the computer. As the computer can be viewed as having Input/Output, Memory and Processing Elements and the ability to input and output data, store and retrieve data, and perform arithmetic and logical operations on data - and nothing else, actually - so now the networks can be viewed similarly. The generic Data, Switching and Processing functions in the modern network are similar. A request for service (e.g., dialling) is referred to the Data Base where it is interpreted, the information is routed from source to destination, and whatever processing is required is carried out.
The radical paradigm shifts that are occurring are that these generic functions allow the networks to respond to new technologies to create and deploy new services in months or days instead of what used to take 3 to 5 years; and the architecture will be open - accessible to customers to create and manage their own networks with the features and services they require.
The purpose of this section of the report is to provide an overview of the Information Technology available for the support and deployment of distance education in the next two to three years, particularly with respect to telecommunications systems and services.

The point of the section is to alert researchers and developers as to the communication services and modalities that they can expect to be available in the next two to three years, and towards which they should be orienting their planning, design and implementations.
Technology mediated education has bloomed as a result of advances in information technology[22] and communications[23] systems and services. Distance education has been, and continues to be, one of the major applications of new communication and computer technologies. In fact, because of the universality of education, and its dependence on the dissemination of information, education is often presented as the major application of new communication technologies. Distance education, in the context of collaborative spaces, may be the "killer app" for broadband networks.
In the telecommunications industry the long-range planning horizon - that is, the time to deploy new technologies emerging from the research laboratories and the impudent start-up companies - is two to three years. In the computer industry, particularly in the software domain, the future is six months to one year away. Thus, new products and processes are introduced to the market at an alarming rate, and the trick is to keep ahead of the wave by developing generic systems at a high level of abstraction.
What can we count on in the way of information technology? More of the same! Realization of promises so old that many believe, to their chagrin, that they are no longer merely promises, but reality. Basically, we have an unstoppable move towards universal access to digital networks with immense bandwidth supporting all manners of communications.
The question is: "How do we use this technology to enhance all forms of education?"
It is a truism to state that once a digital information system is in place - let's say with interconnected information terminals, distributed processing power, and software-controlled action effectors - what can be accomplished with the system will grow rapidly. One need only look at the plethora of uses that have grown out of the universal credit card systems: card-swipe terminals, direct debit cards, ATM's, etc. Or, one can look at the Internet and what has to be the most remarkable technology of all time - the World Wide Web!
It is difficult in this environment of rapid innovation, change, and increasing power for researchers to distinguish between the investigation of fundamental research questions[24], the development of innovative applications of new ideas and technologies[25], and the implementation of operational distance education programs. The implication for applications research is twofold. First, there is the danger that entrepreneurial drive will produce packages and/or tools that by-pass any small experimental pilot study that is built by researchers. The second danger is that operational packages must be robust, and thus require careful design and implementation, extensive testing, and maintenance, which are usually beyond the capability of experimental groups - if they are ever incorporated in the first place. The experience to date has been that mainstream development will provide the communications services and the software tools to support distance education. The implication is that research should be concentrated on fundamental questions, abstract architectures, and generic capabilities: knowledge, guidelines, and tools. Tools for the tool makers, perhaps.
The integrated networks of the near future are the result of collaboration between ferocious competitors, who must create standard communication interfaces to survive and thrive. Their modus operandi may be the model for the development of the tools necessary to create collaborative distance educational spaces.
The major, overwhelming trend is towards a single point of connection for broadband, multimedia communication services - the integrated, convergent network in reality will be a single point-of-presence. This point of presence will provide facsimile, audio, and video telephony services, Internet access, and broadcasting.
The realization of this service is rooted in competition. The technology (almost) exists. It consists of access, transmission, services, and platforms - all of it digital and much of it software driven. Today's rapid progress consists of the response of major players in the communications, computer, and entertainment fields to meeting the immense demand for more and more connectivity and content. Ultimately, competing suppliers will offer a single point of entry to all manners of information systems.
Even though the generic nature of the future is certain, its actual form is unknown. We are entering a period of confusion characterized by the introduction of new and competing technologies (systems and services), rapid obsolescence, overblown claims, uncertain quality of service, and commercial instability - and a major quandary for research and development planners.
Indications of progress towards an integrated computer/communications environment are all around us: Voice over IP (the Netheads vs. Bellheads syndrome, and QoS as a major factor); streaming audio and video on the Web (the MP3 threat to copyrights); purely optical backbone networks, ADSL, cable modems, and local microwave access; the set-top box (the Microsoft - Rogers alliance); the 550 MHz PC with DSP and 3D graphics instruction sets; and so on. All these developments are leading to new opportunities for communications users - and distance education is, potentially, a prime beneficiary.
Telecommunications is an area that depends on, and thrives on, standards. Standards are essential for the successful exchange of information. It does no good to speak English, no matter how loudly or distinctly, to an Indonesian who does not understand a word being said. Hand gestures can be misunderstood. Drawing pictures helps, because a picture can be in a domain of common experience.
In telecommunications, the signalling information on a caller's long distance circuits must be understood by the system serving the called party. Again, when new technology, such as ATM transport, is introduced, or when IP networks are used for voice transmission, there must be agreement on exactly what the parameters of each system are.
Standards, in telecommunications, are necessary for interoperability - at all levels. Thus, they are essential. On the other hand, they liberate the designer. Standards apply at interfaces: the plug in the wall, the modulation on the carrier emerging from a modem, the order and meaning of bits in a data packet header.
Technology changes. Telecommunications and computer technology changes faster than any other. So, standards must also change. New standards are required to allow new technologies to be introduced. Otherwise, innovators must convince the world to adopt their technology, discard all others, and constrain their progress to that of the closed group.
There are major players in information technology who follow this route. They 'capture' their clients, develop fierce brand loyalty, and provide excellent service and support because they control every aspect of the technology.
On the other hand, in open systems - where all specifications are public - any member can acquire goods and services from competitive vendors and vendors can create products for a wider market[26].
In contrast, there are few universal standards in educational systems. Every jurisdiction has its own objectives and means of attaining them. There are fundamental differences in approach at every level. Inevitably, in computer-aided learning every new product is the result of the efforts of a dedicated individual or group. The resulting products are rarely useful to others, or used by them. There is little or no consensus on the best way to attain any educational goal, or on the attributes or usefulness of any tools or educational technologies that are produced. This is a direct reflection of the diversity of educational methods and programs, and of the tendency of the educational system to adopt one methodology to the exclusion of others.
The same paradoxes exist in the chaotic world of computers, software, and communications, e.g., Mac vs. PC, Windows vs. UNIX, the IP-ATM-SONET evolution, voice over IP, etc. Each time a new technology is introduced all of the players must respond to protect their interests and make sure that their products are part of the new markets. This is done to a large extent through consortia and forums, as well as the established standards making bodies.
Forums, such as the ATM Forum[27], the ADSL Forum[28], and the IMTC, the International Multimedia Teleconferencing Consortium[29], provide the venues where standards for basic functionality are developed and supported. The forums ensure that new systems are interoperable - a feature distinctly lacking in distance education technology.
There are distance education consortia such as the Distance Education Standards Forum http://www.uwex.edu/disted/tsfwhat.htm; Atlas - distance education, http://www.savie.com/disted.html; and the International Society for Technology in Education, http://www.iste.org/index.html. These organizations provide information on distance education, and evaluation of communications and software tools. There is not much indication that they have any influence over the adoption of even a common base for education technology.
From a systems engineering point of view, there is a need to develop an array of fundamental technology - communications and computer services - to support TeleLearning. These services would form an integrated set of generic tools supporting all aspects of TeleLearning: course and program administration, registration, and accounting; course preparation, delivery, and support; evaluation; interactive communications, discussion groups, tutorials, and counselling; databases, libraries, and resource centres; websites; simulations, laboratories, and other hands-on experiences; and so on.
The services would be part of an overall conceptual architecture for the creation, administration, delivery, and support of distance education. The architecture would
be conceptually compatible with modern telecommunications and computer systems models as well as educational models.
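As a purely hypothetical sketch - the names below are invented here, not drawn from this report or any standard - the following Python fragment suggests what a common contract for such generic services might look like. If every service describes itself in a machine-readable way, tools from different developers can discover and use one another:

# Hypothetical common interface for TeleLearning services.
from abc import ABC, abstractmethod

class TeleLearningService(ABC):
    @abstractmethod
    def describe(self) -> dict:
        """Machine-readable self-description, so tools can discover each other."""

class RegistrationService(TeleLearningService):
    def describe(self) -> dict:
        return {"service": "registration", "version": "0.1"}

    def enrol(self, student_id: str, course_id: str) -> bool:
        # Administration back-ends (records, accounting) would plug in here.
        print(f"enrolled {student_id} in {course_id}")
        return True

class DeliveryService(TeleLearningService):
    def describe(self) -> dict:
        return {"service": "delivery", "version": "0.1"}

    def fetch_module(self, course_id: str, module: int) -> bytes:
        # A real system would stream multimedia from a content store.
        return f"{course_id}/module-{module}".encode()

# Because both services share one interface, a portal can treat them
# uniformly - the interoperability now missing from education technology.
for svc in (RegistrationService(), DeliveryService()):
    print(svc.describe())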
The TL-NCE could be a forum in which the standardized tools for the support of TeleLearning and TeleTeaching could be developed.
An interesting model for how the TL-NCE could lead a national consensus on TeleLearning technology is given by the Computer Science and Telecommunications Board's call for white papers on the Broadband Last Mile, described in Appendix D.
As we have
noted, Distance Education is a killer application for mediated communications.
Computer/communications
technology (informatics) can overcome space and time. In distance education, space may be overcome in the sense that
neither learner nor teacher, student nor tutor, researcher nor information
resources, need be in the same physical place.
Time is overcome in the sense that none of the persons involved have to
be available/accessible at the same time.
The
technology provides more for education.
It supports an automated client-oriented enhancement of educational
administration: admission, registration, scheduling, faculty, student and
course records, standings and promotion, appraisals, evaluations, and so
on. It also provides much subtler advantages, such as standards conversion between European and Canadian formats, that enable broader coverage.
Education comprises (a sketch of how these dimensions might be represented follows this list):
1) many components: teaching, learning, practice, review, analysis, tutoring, and administration;
2) many modes: one-to-one, one-to-many, many-to-one, many-to-many;
3) many media:
   a) written: books, notes;
   b) visual: chalkboard, slides, movies, video, computer;
   c) audio: radio, recordings;
   d) data: instruments, microscopes;
   e) simulation: experiments, experiences, presence;
4) many presentation formats: speech, music, sounds, text, drawings, pictures, graphs, tables;
5) many modalities:
   a) teacher-oriented vs. learner-oriented;
   b) self-paced vs. schedule-paced;
   c) content-based vs. issues-based;
   d) concepts-based vs. facts-based;
   e) objective evaluation vs. subjective evaluation; and
6) temporal modes: real-time (synchronous) and non-real-time (asynchronous).
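As a hypothetical sketch (every name below is invented for illustration), these dimensions might be captured in a shared data structure; a common representation of this kind would be a precondition for the interoperable tools argued for above:

# Hypothetical representation of a learning activity along the
# dimensions itemized above (Python 3.9+).
from dataclasses import dataclass, field
from enum import Enum

class Mode(Enum):
    ONE_TO_ONE = "one-to-one"
    ONE_TO_MANY = "one-to-many"
    MANY_TO_ONE = "many-to-one"
    MANY_TO_MANY = "many-to-many"

class Timing(Enum):
    SYNCHRONOUS = "real-time"
    ASYNCHRONOUS = "non-real-time"

@dataclass
class LearningActivity:
    component: str                                    # e.g. "tutoring", "review"
    mode: Mode
    timing: Timing
    media: list[str] = field(default_factory=list)    # e.g. ["video", "text"]
    formats: list[str] = field(default_factory=list)  # e.g. ["speech", "graphs"]
    self_paced: bool = True

# Example: an asynchronous, self-paced review module.
print(LearningActivity(
    component="review",
    mode=Mode.ONE_TO_MANY,
    timing=Timing.ASYNCHRONOUS,
    media=["computer"],
    formats=["text", "drawings"],
))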
Distance Education has used real-time, or synchronous, communications: instructional television (ITV), as at Stanford - both simplex and duplex, interactive and not. Asynchronous
learning also took place in assignments, essays, library research, and
preparation for tutorials.
Distance
Education has moved towards non-real-time, or asynchronous, communications -
first with computer-aided-learning, then with computer-mediated learning, and
now with an emphasis on interactive web sites.
That is,
distance education has moved from the traditional "sage on the stage"
teacher-oriented delivery system to a virtual, student-oriented
environment. The move is concurrent
with contemporary trends in education.
The move has been from teacher-oriented to student-oriented, self-paced
learning, with individual objectives and goals.
Access to
prepared instructional material at a time appropriate to the learner's
circumstances, and progress through the material at a rate appropriate to the
individual learner's ability are precepts of modern learning theory. Thus, multimedia, hyperlinked database systems - which are universally accessible through web browsers - have been widely adopted as the dominant means for the realization of asynchronous distance education.
Synchronous
communications must still play a large and valuable (and perhaps, essential)
role in distance education. For
example, there is no substitute for the knowledgeable service representative,
be she a departmental secretary, a student advisor, or an accounting office
clerk. There is also no substitute for
synchronous access by students, either alone or in a group to a teaching
assistant, or to a professor. There are
many instances where an instructor or graduate supervisor needs to communicate
directly with a student. Many of these
activities are now supported by a venerable technology, namely the telephone.
We are
entering a new era of communications.
It is no longer sufficient to realize that the network extends to the keyboard (or, as the Sun motto used to read, "the network is the computer"), or to view the worlds of telecommunications and computer communications as distinct, with neither having anything to do with broadcast technology.
A revolution is taking place. It is not unexpected, but the rate at which it is occurring is remarkable, and its nature was not foreseen: the networks truly are convergent, and it is the telecommunication carriers who now are leading the change, rather than the Internet router providers. As the
networks converge behind the scenes, distance educators will be presented with
a range of service offerings that will allow the effective use of synchronous
communications to enhance their educational products and services.
For example, the TL-NCE is trying to incorporate the use of broadband Internet capabilities into its operations, as well as its research. The technology is there, but in reality it
isn't. It is not accessible or
available without the intervention of specialists with specific knowledge of
the arcane procedures required to facilitate communications. Interconnections can be made, but it is a
laborious task. The set-up of any particular connection requires individual expert attention. The performance of the connection, once up, is probably not automatically managed, and the session requires dedicated attention from the technologists who set it up. Replicating the connection may or may not be simple; changing it is not. The network services we take for granted in, say, audio teleconferencing are simply not there for broader applications.
Groupware -
software that facilitates interactive cooperation among groups of people - is
readily adapted to distance education.
For example, Lotus Notes is used at Carleton University in the following
way.
"USING LOTUS NOTES IN COURSE DELIVERY
Darrell Herauf, an instructor in the School of Business, will deliver this
Friday's TLT Showcase starting at 11:35 a.m. in Room 329 Paterson Hall. For
the past three years, he has been an academic coach in the Executive MBA
program at Athabasca University. This program is offered primarily through
the Internet and uses Lotus Notes, a groupware product that enables students
to access course information and materials, conduct group discussions,
complete team projects, and submit course work electronically. Students,
faculty and staff members are all connected, creating an interactive
learning environment while giving students the support and services they
need. Herauf will demonstrate some of the features of Lotus Notes and
discuss how it could be used in delivering courses at Carleton University.
The Carleton Teaching, Learning and Technology Roundtable organizes the TLT
ShowCase Series. For more information or to suggest a ShowCase topic, please
contact the Teaching and Learning Resource Centre at 520-4433
(http://www.carleton.ca/tlrc)"[30]
There are, at the same time, many Internet conferencing tools: CU-SeeMe, NetMeeting, PictureTel's LiveLAN, Intel's ProShare 500, and so on. Most of these now use the H.323 standard for video telephony over IP, which has replaced H.320, the standard for video telephony over ISDN. These software systems are beginning to make full use of the expanded Digital Signal Processing instructions on the Pentium III. So the software is interoperable, and the computationally intensive parts run as machine-language instructions on the world's dominant PC. There are even a number of complete DVC packages available for distance education. But this technology is still limited. Regular Internet access and transmission
bandwidth are still too low.
Multi-party connections are limited and very few educators or
educational innovators are used to operating in this new milieu.
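Some back-of-envelope arithmetic makes the bandwidth problem plain. The frame size, rate, and codec figures below are assumed as typical for the period, not taken from this report:

# Why access bandwidth, not software, is the bottleneck for desktop video.
width, height = 320, 240      # a modest conferencing window
bits_per_pixel = 24
fps = 15

raw_bps = width * height * bits_per_pixel * fps
print(f"uncompressed: {raw_bps / 1e6:.1f} Mbit/s")   # about 27.6 Mbit/s

# An H.263-class codec might bring that down to a few hundred kbit/s...
compressed_bps = 300_000
print(f"compressed:   {compressed_bps / 1e3:.0f} kbit/s")

# ...which still swamps a 56 kbit/s modem and strains early ADSL/cable.
for name, rate in [("56k modem", 56_000), ("ADSL (typical)", 400_000)]:
    verdict = "fits" if rate >= compressed_bps else "does not fit"
    print(f"{name}: {verdict} at {rate / 1e3:.0f} kbit/s")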
In fact,
there are many reasons put forward as to why synchronous broadband multimedia
communications - read video conferencing - is not more widely accepted than it
is. These reasons will become less
important, and the potential of synchronous broadband multimedia communications
will become available in distance education, as multi-point, multimedia,
broadband conferencing becomes a communications service.
We have
seen this progression as communications has moved from information
communication (telephone, radio, email) through information distribution
(Internet and the WWW) to services distribution (ISP, on-line banking).
Presumably,
the services required for TeleLearning will be available.
They will
be available as part of the telecommunication system before they can be
developed and deployed by distance education researchers.
Distance education - mediated education - computer-mediated learning - has gone through a revolution - a paradigm shift: from communications-mediated transmission, to computer-mediated communications, to interactive, computer-mediated, on-line learning. The current status is a virtual learning space.
Current
trends in network development will provide network services that will allow the
re-introduction of the teacher as a subject-area specialist in an active role
in the learning process.
The dominant message is that current trends in network development will provide network services that will allow the creation of complete, interactive, interoperable distance education systems and their introduction into the learning process.
The
conclusion is that the TL-NCE should aim its research towards a horizon of a
richer, more useable, distributed, intelligent, broadband, multimedia,
communications environment.
[1] D. C. Coll, "Multimedia Communications: Reaching a Critical Threshold", The Canadian Journal of Electrical and Computer Engineering, vol. 1-2, 1998.
[2] D. C. Coll, "Dawn of the Digital
World: Universal Multimedia Communication is Now Possible", Post 2000,
October 11, 1997, The Financial Post.
[3] L. G. Kazovsky, G.-D. Khoe and M. O. van Deventer, "Future Telecommunication Networks: Major Trend Projections", IEEE Communications Magazine, vol. 36, no. 11, pp. 122-127, November 1998.
[4] See also: R. W. Lucky, "New Communications Services: What Does Society Want?", Proc. IEEE, vol. 85, no. 10, pp. 1536-1543, October 1997.
[5] A. Kovalik, "Reference Architecture for Digital Video/Audio Transfer and Streaming", SMPTE Journal, pp. 544-551, August 1998.
[6] D. G. Messerschmitt, "The Convergence of Telecommunications and Computing: What Are the Implications Today?", Proc. IEEE, vol. 84, no. 8, pp. 1167-1186, August 1996.
[7] B. W. Moore, "The ITU's Role in the Standardization of the GII", IEEE Communications Magazine, pp. 98-106, September 1998.
[8] The 4 kHz voiceband gives rise to the basic 64 kbps rate of the Time Division Multiplexed (TDM) telephone network. According to the Sampling Theorem, a 4 kHz signal must be sampled at least 8,000 times per second. When each sample is quantized into one of 256 levels (8 bits), 8,000 x 8 = 64,000 bits must be transmitted every second.
[9] The author has observed a downstream rate of 400 kbps on his Sympatico HSE connection. He lives at the limit of coverage and his service is often unreliable. When working, the rate is sufficient to receive high-quality NetMeeting video from his Carleton University office, allowing him to keep an eye on the parking lot.
[10] D. C. Coll and K. E. Hancock, "A Review of Cable Television: The Urban Distribution of Broad-Band Visual Signals", Proc. IEEE, vol. 73, no. 4, pp. 773-788, April 1985.
[11] D. C. Coll and K.E. Hancock, "A Study of
the Categorization of Cable TV News Services with Particular Reference to Data
Transmission", a P.A. Lapp Ltd. Report, prepared for Premier
Communications Ltd., December 1980.
[14] http://www.3com.com/nsc/500606.html,
http://www.ralphb.net/ISDN/, http://www.nationalisdncouncil.com/index.html
[15] The axiom that “Mainstream technology, through the massive R&D expended on it, will always outperform specialized machinery constructed to overcome the limitations of mainstream technology by the time that specialized technology is deployed” is referred to by the author as Morris’ Second Law, in deference to Prof. L. Robert Morris (ret.), a colleague who had a genius for implementing complex algorithms in mainstream processors.
[16] In the US and Canada, tera means 10^12, while giga means 10^9. So this number is 1.592 x 10^15 bits per second, or 1.592 thousand million million.
[17] Pierre Lepage, Nortel, Ottawa, Voice over IP, Carleton University 94.470 Lecture Notes.
[18] Andrew Brown, Cisco, Ottawa, Internet Routing Architectures, Carleton University, 94.470 Lecture Notes, March 2000.
[19] See, for example, the tools listed in http://www.outreach.utk.edu/weblearning/#Asynchronous WBT Solutions.
[20] MBone, the multicast backbone, was developed in a computer communications lab and has widespread application on the Internet among the scientific community. See http://www.terena.nl/libr/gnrt/group/ for information.
[21] From John Visser, Intelligent Networks, a 94.470 Telecommunications Engineering lecture, Systems and Computer Engineering, Carleton University, March 3, 2000.
[22] By "information technology" we mean the full gamut of technology that provides the means to generate, store, process, disseminate, receive, and display information in any form.
[23] By "communications" we refer to the means to disseminate, transmit, distribute, receive, asnd otheise communicate information in any form: data, text, audio, video, facsimile; in a variety of modes: point-to-point, broadcast; through wireless and "tethered" links, circuits, and networks. We use the term generically to encompass telecommunications and computer communications. In fact, in modern communication systems it is impossible, and/or irrelevant, to attempt to distinguish between computing and communication.
[24] For example, "What bandwidth is required for effective use of television in distance education?", or "What architecture should be employed for the development of distance education systems?", or "What are the effects of communication quality of service on distance education?"
[25] For example, the application of Java applets, or the use of VRML or XML, or the development of a course in Toolbook II, or the use of interactive web site technology.
[26] For information on standards bodies consult: http://techedcon.com/refguide/Standards_Organizations.htm, http://techedcon.com/refguide/Referencepage.htm, http://www3.l0pht.com/~oblivion/design/telecomm.html, http://www.schaffner.com/standardsindex.html.
[27] www.atmforum.com The ATM Forum is an international non-profit organization formed with the objective of accelerating the use of ATM (Asynchronous Transfer Mode) products and services through a rapid convergence of interoperability specifications. In addition, the Forum promotes industry cooperation and awareness. The ATM Forum consists of a worldwide Technical Committee, three Marketing Committees for North America, Europe and Asia-Pacific as well as the User Committee, through which ATM end-users participate.
[28] www.adsl.com The DSL Forum is an association of competing companies. In many cases the Forum identifies a particular requirement for DSL, and then advises another standard body, such as T1E1, of this requirement, in the hopes that it will take suitable action. The DSL Forum has established formal liaisons to key standards bodies and working groups, including: UAWG, ATM Forum, ANSI T1E1.4, ETSI TM6, DAVIC, IETF, ITU. The DSL Forum is recognized for its role as a "forum" -- a place to meet and discuss -- which has just as much value as its technical and promotional work. With nearly 300 members, the Forum consists of the key members of the communications, networking and computer industries.
[29] www.imtc.org The International Multimedia Teleconferencing Consortium, Inc. (IMTC) is a non-profit corporation comprising more than 150 companies from around the globe. The IMTC's mission is to promote, encourage, and facilitate the development and implementation of interoperable multimedia teleconferencing solutions based on open international standards.
[30] Internal newsletter, Carleton University