Worse, traditional textbooks are often merely incomplete revisions of even older books, so that most of what they contain may be decades out-of-date by the time they are published. The faculty of the CIS Department adopted one such textbook, Kroenke/McKinney, Processes, Systems, and Information 1e, as the textbook for all CIS 301 sections. I will update this page frequently, noting topics whose coverage in this textbook is out of date as they come to my attention.
To extend the window of usefulness of introductory courses, Prof. John Drake (then of Eastern Michigan University) and I began working on a new approach: Instead of a snapshot of the state of information systems in the present instant, we focused on a hierarchy of concepts and principles (below) basic enough to remain useful for the rest of your careers. These concepts and principles will remain useful regardless of where your life may take you, not only in business and management but also in any profession and any field of science, scholarship or art. You should be able to use the concepts and principles from this conceptual hierarchy to understand and use the information systems that you will encounter and need in your life, in the organizations that you may work and participate in, and in the global Human Civilization of the information age. I will try, to the extent that there is time, to merge the useful parts of the Department's assigned textbook with our new approach. When following the assigned textbook is time-consuming enough to keep me from exploring every concept and principle in the conceptual hierarchy, I hope that you will continue to explore it outside the classroom, discuss it with your study partner, and ask me questions.
Please do not distract me or your fellow students (some of us may have attention deficits) during class. Do not behave, during class, in any way that would not be acceptable in conference with the CEO (or other chief officer) of an organization in which you work, or in which you would like to work in the future.
I postponed the textbook's opening video to Week 2. To open the course with a video of failure, as the textbook suggests, would have set an anti-productive example for your future work as enterprise managers. The people that you will work with will be far better motivated by the prospect of pride in what they are about to accomplish than by fear of failing.
I am showing, instead, a video of what one enterprise, TESCO, was able to accomplish with effective use of information systems:
https://www.youtube.com/watch?v=fGaVFRzTTP4
First: the organization, requirements, and the work expected of you in this course (per syllabus, above.)
Then, some notes on topics whose coverage in the required textbook is out of date, or otherwise inadequate. The function of the Chief Information Officer's sub-organization has changed - it does not necessarily concern "Information Systems," but rather the provision of the information services that are needed to support the work of the larger organization.
Additional discussion points for chapters 1 and 2:
The surface error is at the interface of finance and of human resources. Kelly apparently thought that the standard practice of providing every employee with supervision, and a new employee with mentoring, was an unnecessary expense. Jason - in the textbook's video, the obvious choice for the assignment of mentoring Jennifer - had never heard of her. How could Kelly, a successful entrepreneur, have made such an elementary mistake?
The cause of Kelly's error is a false assumption that almost everyone makes: "The other person is just like me. To find out what she will do, and what she needs, I only need to imagine myself in her place, and introspect into how I would act if I were in her situation." But every person is an individual, different from everyone else in nearly everything. Kelly has done great things - immigrating to the United States, starting and growing an enterprise of her own, managing a successful business - without supervision or mentoring from anyone else. How reasonable is it of Kelly to think that Jennifer, too, will be able to succeed without mentoring or supervision?
I included the two Corning videos in the lecture preceding the assignment of Chapter 2, because Corning has earned its success, and plans to continue earning its success, by carrying out an innovation strategy rather than a competitive strategy. Only 30 years ago, 3 companies in upstate New York were so dominant in their fields that it looked like they would always be at the top: Kodak, Xerox, and Corning. Corning followed a consistent innovation strategy. Its innovation strategy resulted in unique products, such as "Gorilla Glass," that literally have no competition. Kodak consistently followed a competitive strategy, and is bankrupt. Xerox followed a competitive strategy, letting others, notably Apple, take on the risks of innovating with the inventions of Xerox PARC (Palo Alto Research Center), a superbly creative laboratory neglected by Xerox management. Xerox declined steadily until 2001, when leaders Anne Mulcahy and Ursula Burns changed from the previous competitive strategy to an innovation strategy. Xerox recovered from its long decline, and is becoming a leading provider of cloud-based document services for "Enterprise Scale" (large organization) customers.
The first week of the course concludes with 2 videos, "A Day Made of Glass" and "A Day Made of Glass 2." These videos illustrate how Corning, an "old economy" company founded in 1851, uses its information resources to carry out a successful innovation strategy. Please keep these in mind when studying the Organizational Strategy sections of Chapter 1.
Notes from class discussion for Chapter 3:
Loading, starting, and terminating the execution of programs on early computers was a complicated, demanding, and time-consuming job. When computers were first deployed outside of research environments, they were given human "computer operators" - employees trained to do these complex tasks correctly, and with as little waste as possible of valuable computer time between programs.
In 1956, Robert L. Patrick of General Motors Research and Owen Mock of North American Aviation created the GM-NAA I/O software program to automatically execute a new program as soon as the previous program had finished (batch processing). In order to be able to load and start other programs, the GM-NAA I/O software program was always the first program loaded and started after a computer was turned on. GM-NAA I/O resided permanently in the computer's executable storage while running other programs.
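In rough outline, such a resident batch monitor does something like the following minimal sketch - in Python, purely for illustration (GM-NAA I/O was written for the IBM 704, and the job names below are invented): start the next queued job as soon as the previous one finishes, wasting no computer time between programs.

    import subprocess

    # Hypothetical queue of job programs, in the order the operator loaded them.
    job_queue = ["./payroll_run", "./inventory_update", "./sales_report"]

    for job in job_queue:
        print("Loading and starting", job)
        result = subprocess.run([job])   # the monitor stays resident while the job runs
        print(job, "finished with exit status", result.returncode)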
Because the function of the GM-NAA I/O software program was to help the computer operator in operating the computer, it was called an "Operating System." The term "Operating System" still refers to software that resides in the computer's executable storage, and that starts, runs, and terminates other programs. The capabilities of operating systems grew, until it became possible to run a computer without employing a computer operator. Today, operating systems are sold as a part of a software distribution such as Microsoft Windows, Apple OS X, Oracle Solaris, or one of the many distributions of Linux.
Mobile operating systems were stripped-down operating systems for devices, such as programmable cell phones, that had very limited hardware. They were derived from desktop/laptop operating systems: iOS from OS X, Android from Linux, and Windows Phone from Windows NT. Recently, Microsoft merged its two operating system branches, desktop Windows NT and Windows Phone, back onto a single operating system core, shared by Windows 8 and Windows Phone 8. Similar efforts to merge Android back with standard Linux, and to merge iOS with OS X, are in progress.
Unix began in 1969, after Bell Labs terminated its participation in the Multics project. Bell Labs allowed some researchers to continue their work on parts of Multics, using a small computer with only 64K words of memory. They created a new operating system, Unix, by distributing the core functionality of Multics among several pieces, each of which was small enough to fit in less than 64K of live storage. These pieces were connected together with "small languages," later called "interfaces" or "protocols." They discovered that these small pieces were easier to plan, build, and maintain than large, monolithic software systems. When a piece became too complicated, or someone wanted one that worked differently, it could be re-built and replaced, like a block in a toy structure built of Legos. Like the Ship of Theseus, many operating systems that we use today have had all their parts replaced, until not a single original piece remains - yet they are all still recognizable as descendants of the original Unix.
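To make the "small pieces connected by small languages" idea concrete, here is a minimal sketch in Python (on a Unix-like system; the words fed through the pipeline are invented for illustration) of two small programs connected by a pipe - the equivalent of the shell pipeline sort | uniq:

    import subprocess

    # Two small programs, each doing one job; the "small language" connecting
    # them is just lines of text flowing through a pipe.
    sort = subprocess.Popen(["sort"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    uniq = subprocess.Popen(["uniq"], stdin=sort.stdout, stdout=subprocess.PIPE)
    sort.stdin.write(b"glass\nkodak\nglass\nxerox\n")
    sort.stdin.close()                  # end of input: sort can now run to completion
    sort.stdout.close()                 # the parent no longer needs its copy of this pipe end
    print(uniq.stdout.read().decode())  # prints glass, kodak, xerox - each only once

Either small program can be replaced - with a different sort, say - without touching the other, which is the maintainability property described above.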
Notes on class discussion of Chapter 4:
The textbook's coverage of relational database management systems, and of enterprise resource planning technologies, is largely accurate. However, the work of an organization's Chief Information Officer (and of the sub-organizations that report to the CIO) no longer requires, except for legacy systems, familiarity with the hardware and software systems involved in the provision of database and ERP services to the enterprise. As of 2013, it is almost invariably more cost-effective to obtain database and ERP services from vendors "in the Cloud" (meaning services obtained without regard for the specifics of the systems providing those services.) Cloud service providers such as IBM, Amazon, Oracle, SAP, and others provision cloud services, and (optionally) interfaces between their cloud services and legacy internal services, either directly or through local and industry-specific resellers.
Notes on class discussion of Chapters 5-6:
The discussion of business processes omits one of the most important insights learned in recent years: Information service development is not a separate activity, but a part of the development of business processes. The methodology of business process development should be applied to each business process as a whole. Which steps in a business process are to be automated, and how, is a detail that must not be frozen in advance. To try to develop an "Information Service" - rather than to develop a business process that might or might not use one or more information services - is a fundamental error. The textbook perpetuates this error by discussing business process development in Chapter 5, separately from its discussion of "Information System" - today, Information Service - development in Chapter 12.
The methodology of developing business processes (and of developing Information Services and, earlier, "Information Systems") has evolved through 3 historical stages:
(1) Industrial engineering: Also known as "Fordism," this was the first systematic (as opposed to ad hoc) methodology of business process development. It applied the discipline of "engineering" - that is, of mathematical analysis, followed by design, implementation, deployment, and maintenance - to the improvement of business ("industrial") processes. As applied to software, this was called the "software development life-cycle" (SDLC) or, after it started to be deprecated for lack of feedback from real-life testing, the "Waterfall Model." Industrial Engineering was successfully applied to manufacturing and distribution processes, and university programs in Industrial Engineering still train experts in these specialized kinds of business processes. In other business areas, "Industrial Engineering," including SDLC in software development, resulted in success less than 40% of the time. As a legacy of this phase, systematic re-development to replace ad-hoc business processes is still called "re-engineering."
(2) Process Quality Assurance. An extension of statistical quality assurance methodologies originally developed in public health and in agriculture, statistical methodologies for business process improvement de-emphasized systematic analysis and design, replacing them with the results of statistical testing. In place of analysis and design, business processes were to be improved "intuitively," but with statistical controls to assure that changes improved the results. Much of the terminology used in the textbook, such as "Six Sigma" - a process controlled so tightly that the nearest specification limit lies six standard deviations from the mean, about 3.4 defects per million opportunities - dates from this phase.
Since automated computations are deterministic rather than random, the replacement of systematic analysis with ad-hoc "intuitive" coding was accompanied, in software development methodologies, by strict enforcement of test plans. The development of an information system was considered successful if the code produced the results that were specified in the prior test plan. However, software coders found that they could easily achieve "success" by writing a quick and crude prototype before the test plan was due, and including the results produced by the prototype as test cases in the plan. When the prototype was buggy, the buggy results from the prototype were frozen into the product - it was easier to code the test plan cases into a software product than to change the test cases. Thus, defects were frozen into products to meet "Six Sigma Quality Targets." For example, a Solaris library for computing trigonometric functions was generally very accurate - but gave incorrect results, for a small number of specific input values, that came from test cases in a test plan based on a quick-and-buggy prototype. And at one point, I fixed an error in the billing software for a Dimension telephone switch - and testers insisted that the defect be put back into the code, since the incorrect result had been frozen into the test plan. As more and more such frozen defects were discovered by users, the once promising "Six Sigma" methodology faded.
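Here is a hypothetical sketch of how such a defect gets "frozen" (the numbers are invented for illustration, not taken from the actual Solaris case): the test plan's expected value was copied from a buggy prototype's output, so the mathematically correct implementation now fails the test.

    import math

    frozen_expected = 0.4792    # hypothetical value copied from a buggy prototype's output
    correct = math.sin(0.5)     # the mathematically correct result, 0.47942553...

    if abs(correct - frozen_expected) > 1e-4:
        # The correct implementation "fails" the frozen test case; under rigid
        # test-plan enforcement, the easier path is to put the defect back.
        print("Correct code fails the frozen test case.")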
(3) Spiral Development. In spiral development, a business process, or an information service supporting a business process, is developed with successive cycles of analysis, specification, design, implementation, and testing. Systematic analysis, specification and design assure a documented (the specification is often in the form of user documentation,) rationally understandable (and thus maintainable) process. However, the analyses, specifications and designs are never "frozen" or final. If the results of the testing phase show that some aspects of the analyses, specifications, design or implementation need to be changed, they are changed in the next cycle of the spiral.
In spiral development, the target business process is first decomposed into a core functionality and a set of additional feature sets that can be added to the core functionality, one feature set at a time. The process development team works through several "minor cycles" of the development spiral with a single tester. When, after several minor cycles, the core functionality or an add-on feature set is completely successful in testing by the single tester, it is tested again with a whole organization (or, in the case of an externally marketed process, product or service, with several testers or organizations.) A successful test with the "targeted customers," internal or external, completes a "major cycle" of the development spiral.
The tester who works with the development team is delegated to the development team by the organization for which the business process is being developed. Experienced development teams sometimes have a trained observer - often a social anthropologist - whose role, in addition to observations that provide an empirical foundation for analyses, includes identifying a prospective tester for delegation to the development team. Ideally, the delegated tester is a "resource person" (sometimes informally called a "shaman") whom co-workers have been observed consulting about business process problems. Through helping others, the "resource person" becomes familiar with the many business process contingencies that may arise in a real business process. When a process that includes a service or product is being developed for external sale, the tester is often an expert in the targeted business. In the development of consumer services, the best tester may be a cognitive or social psychologist.
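As a schematic sketch of the minor-cycle/major-cycle structure described above (all names here are illustrative stubs, not a real methodology implementation):

    def develop_feature(feature, tester_accepts, organization_accepts):
        """One major cycle of the spiral for a single feature set."""
        while True:
            # One minor cycle: analysis, specification, design, implementation,
            # then a test by the single delegated tester.
            artifact = "build of " + feature      # stand-in for the real work
            if tester_accepts(artifact):
                break                             # minor cycles complete
        # Major-cycle test with the whole target organization (or market).
        return organization_accepts(artifact)

    # Core functionality first, then one feature set at a time.
    for feature in ["core functionality", "feature set A", "feature set B"]:
        if develop_feature(feature, lambda a: True, lambda a: True):
            print("major cycle complete:", feature)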
Although the "hands-on" exercises for Chapters 6-8 involve the default interfaces of the various SAP services, these default interfaces are very wasteful of valuable (remember the loading factor) employee time, and are almost never used in any business that can buy or develop more specialized, time-saving employee-facing interfaces for their SAP or other information services. Because they are simple, these specialized employee-facing interfaces, usually in-browser applications for stationary workers and mobile device applications for mobile workers, can be readily used without special knowledge or skills. For example, an employee taking an inventory of specialized machinery does not need to learn to identify each machine. It is enough to be able to snap a smartphone picture of its label or bar code.
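For instance, here is a minimal, hypothetical sketch of such a specialized employee-facing interface (the endpoint URL and field names are invented; a real deployment would use the ERP vendor's actual integration interface):

    import json
    import urllib.request

    def report_scanned_item(barcode: str, location: str) -> None:
        """Post one scanned machine label to a (fictitious) inventory endpoint."""
        payload = json.dumps({"barcode": barcode, "location": location}).encode()
        request = urllib.request.Request(
            "https://erp.example.com/api/inventory/scan",   # placeholder URL
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)

    # The employee's whole task is to snap the label; the app then makes a call like:
    # report_scanned_item("0123456789012", "plant 2, bay 14")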
Because just 4 businesses - SAP, IBM, Oracle and Amazon - account for most of the ERP services used by North American business, most resources can be tracked with interfaces to ERP systems of an enterprise's suppliers and customers. This minimizes the costs of data entry, and the risk of errors. The default interfaces are not used, except as an emergency option in case of network or service downtime.
Chapter 7 class discussion: Follow-up on information service integration between the business and its suppliers (see note for Chapter 6 above.)
Chapter 8 class discussion: Follow-up on information service integration between the business and its customers (see note for Chapter 6 above.)
Customer-facing interfaces to information services (the "e-commerce" web sites and mobile apps) are normally procured through turn-key contracts, where the information-service provider does everything but "turn on the key." Amazon is the largest supplier of customer-facing e-commerce interfaces under turn-key contracts. An innovative or highly specialized business may need to develop its own customer-facing interfaces, as part of new business process development. As of 2013, this is normally done with the spiral-development methodology discussed above.
Class discussion preview for chapters 9 and 10:
Of the two current models of collaboration on projects, the committee/team model and the ownership/delegation model, the textbook only discusses the committee/team model. This is unfortunate, because the ownership/delegation model, pioneered by Linus Torvalds, has been in use since 1991; it has proven faster and sounder for most projects, and is now dominant.
The historical comparison between these two models of collaboration begins in 1983, when Richard Stallman's GNU Project began the development of open-source replacement packages, released under the GNU General Public License (GPL), for the components of prior UNIX systems. Most GNU modules were developed by individuals, with little need for formal collaboration. The exception was the kernel (the core component) of the operating system: because the kernel was complex, and required interfaces with many other modules, it required wide-ranging collaboration. This collaboration was (and at GNU still is) carried out using the traditional committee/team model.
The first attempt to build a GNU kernel began in 1983, and ended in 1986, when the committee/team discussions of the kernel's architecture stalled in deadlock. According to Wikipedia, in 1987 Richard Stallman proposed to use the Mach microkernel developed at Carnegie Mellon University; work on this was delayed for three years due to uncertainty over whether CMU would release the Mach code under a suitable license. The GNU kernel team re-started the committee/team collaborative effort in 1990, under the project name "Hurd."
In 1991, Finnish computer science student Linus Torvalds decided to develop an alternative (later GPL-licensed) kernel, using a spiral development methodology and a novel ("ownership/delegation") model of collaboration. In the ownership/delegation model, decisions are made by an individual who is said to "own" the project. The "owner" may delegate responsibility for components of the project to others, but still has the authority to make final decisions in case of disagreement among the collaborators, and to revoke delegations. In projects governed by the GPL, the source code of all components is publicly available. In case of disagreement with the owner, other collaborators are free to "fork" the project under the GPL, but this is rare (the best known instance is the "forking" of LibreOffice from the OpenOffice project.)
Linus Torvalds' kernel project, now called Linux, completed the first major cycle of spiral development and released a kernel with core functionality in less than 6 months. The first complete, stable version of the Linux kernel was released in 1994, just 3 years after the start of development.
In contrast, the development of the GNU Hurd kernel, started in 1990 (before Linus Torvalds started the development of Linux in 1991), frequently stalled while disagreements among team members were being resolved in committee meetings. The development of the core functionality of Hurd took until 2002 (12 years, compared with 6 months for Linux.) A stable version of GNU Hurd is not available as of 2013, 23 years after the start of the Hurd committee/team collaboration.
Soon after the release of the first stable version of Linux in 1994, the use of Linus Torvalds' ownership/delegation model of collaboration spread to collaborative development of business processes, as well as information systems and information services. Collaboration features supporting the ownership/delegation model are available for e-mail (list services,) web (automatic indexing,) Wikis, and content-sharing services (most recently Dropbox.) In contrast, the committee/team model of collaboration is now rarely used, except in government agencies and in very conservative enterprises.
Social media services are evolving rapidly. Social media service enterprises are highly competitive, and often pursue a strategy of "claiming" a population of users with a specific cognitive style, by providing a user interface suited to the cognitive style of this population. For example, I find Facebook's user interface to be very well-suited to my own cognitive style, but I know several brilliant intellectuals who find that they cannot use Facebook effectively. On the other hand, I find the user interfaces of Google Groups, LinkedIn, and Twitter uncongenial, but each has enthusiastic devotees.
In the longer term, the effects of individual differences in cognitive style are aggravated by the forming and "overlearning" of idiosyncratic habits based on different user interface styles. The automatic carrying-over of habits learned in an incompatible environment (in psychology this is called "proactive interference") can be anti-productive and even destructive.
We know, from the history of other information services (such as web search services) that information service providers eventually learn to provide ways to customize one's user interface, to accommodate (or at least not interfere with) one's individual cognitive style and overlearned habits. Today, however, we are still months (perhaps even years) away from "social media" services following suit. In the meantime, an organization whose people have diverse cognitive styles and varied overlearned habits - as is almost always the case - would be prudent not to require the use of any one specific social media service, until the user interfaces of social media services become adequately customizable. Until then, collaboration can be supported with mailing lists, wikis, automatically indexed web sites, and content-sharing services.
To use information resources effectively, you should be familiar with the essential concepts of their use. The indentation of concept and principle links corresponds to their places in a conceptual hierarchy. Because many concepts have multiple inheritance, this hierarchy is convenient but not unique. When you find yourself needing to know more, during or after this course, use the links below first. As with all other components of this course, please share the links that you follow, and discuss them, with your study partner; remaining questions should be e-mailed to me.